00:00:00.001 Started by upstream project "autotest-per-patch" build number 127215 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.127 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.128 The recommended git tool is: git 00:00:00.128 using credential 00000000-0000-0000-0000-000000000002 00:00:00.130 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.184 Fetching changes from the remote Git repository 00:00:00.186 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.228 Using shallow fetch with depth 1 00:00:00.228 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.228 > git --version # timeout=10 00:00:00.257 > git --version # 'git version 2.39.2' 00:00:00.257 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.274 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.274 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.342 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.356 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.370 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:08.370 > git config core.sparsecheckout # timeout=10 00:00:08.381 > git read-tree -mu HEAD # timeout=10 00:00:08.400 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:08.422 Commit message: "packer: Add bios builder" 00:00:08.423 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:08.506 [Pipeline] Start of Pipeline 00:00:08.524 [Pipeline] library 00:00:08.527 Loading library shm_lib@master 00:00:08.527 Library shm_lib@master is cached. Copying from home. 00:00:08.543 [Pipeline] node 00:00:08.553 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:08.554 [Pipeline] { 00:00:08.566 [Pipeline] catchError 00:00:08.568 [Pipeline] { 00:00:08.581 [Pipeline] wrap 00:00:08.590 [Pipeline] { 00:00:08.598 [Pipeline] stage 00:00:08.600 [Pipeline] { (Prologue) 00:00:08.812 [Pipeline] sh 00:00:09.097 + logger -p user.info -t JENKINS-CI 00:00:09.117 [Pipeline] echo 00:00:09.119 Node: WFP8 00:00:09.127 [Pipeline] sh 00:00:09.433 [Pipeline] setCustomBuildProperty 00:00:09.449 [Pipeline] echo 00:00:09.451 Cleanup processes 00:00:09.457 [Pipeline] sh 00:00:09.743 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.743 2672886 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.760 [Pipeline] sh 00:00:10.049 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.049 ++ grep -v 'sudo pgrep' 00:00:10.049 ++ awk '{print $1}' 00:00:10.049 + sudo kill -9 00:00:10.049 + true 00:00:10.065 [Pipeline] cleanWs 00:00:10.076 [WS-CLEANUP] Deleting project workspace... 00:00:10.076 [WS-CLEANUP] Deferred wipeout is used... 
00:00:10.082 [WS-CLEANUP] done 00:00:10.085 [Pipeline] setCustomBuildProperty 00:00:10.096 [Pipeline] sh 00:00:10.378 + sudo git config --global --replace-all safe.directory '*' 00:00:10.469 [Pipeline] httpRequest 00:00:10.492 [Pipeline] echo 00:00:10.494 Sorcerer 10.211.164.101 is alive 00:00:10.501 [Pipeline] httpRequest 00:00:10.506 HttpMethod: GET 00:00:10.507 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:10.507 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:10.535 Response Code: HTTP/1.1 200 OK 00:00:10.535 Success: Status code 200 is in the accepted range: 200,404 00:00:10.536 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:23.290 [Pipeline] sh 00:00:23.576 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:23.590 [Pipeline] httpRequest 00:00:23.605 [Pipeline] echo 00:00:23.607 Sorcerer 10.211.164.101 is alive 00:00:23.614 [Pipeline] httpRequest 00:00:23.619 HttpMethod: GET 00:00:23.620 URL: http://10.211.164.101/packages/spdk_a14c64d79f2db52c8f9e6cc203161cdd5407184f.tar.gz 00:00:23.620 Sending request to url: http://10.211.164.101/packages/spdk_a14c64d79f2db52c8f9e6cc203161cdd5407184f.tar.gz 00:00:23.644 Response Code: HTTP/1.1 200 OK 00:00:23.645 Success: Status code 200 is in the accepted range: 200,404 00:00:23.646 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a14c64d79f2db52c8f9e6cc203161cdd5407184f.tar.gz 00:01:08.011 [Pipeline] sh 00:01:08.295 + tar --no-same-owner -xf spdk_a14c64d79f2db52c8f9e6cc203161cdd5407184f.tar.gz 00:01:10.920 [Pipeline] sh 00:01:11.205 + git -C spdk log --oneline -n5 00:01:11.205 a14c64d79 raid: allow to skip rebuild when adding a base bdev 00:01:11.205 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:01:11.205 fc2398dfa raid: clear base bdev configure_cb after executing 00:01:11.205 5558f3f50 raid: complete bdev_raid_create after sb is written 00:01:11.205 d005e023b raid: fix empty slot not updated in sb after resize 00:01:11.218 [Pipeline] } 00:01:11.238 [Pipeline] // stage 00:01:11.248 [Pipeline] stage 00:01:11.251 [Pipeline] { (Prepare) 00:01:11.271 [Pipeline] writeFile 00:01:11.287 [Pipeline] sh 00:01:11.572 + logger -p user.info -t JENKINS-CI 00:01:11.585 [Pipeline] sh 00:01:11.870 + logger -p user.info -t JENKINS-CI 00:01:11.884 [Pipeline] sh 00:01:12.168 + cat autorun-spdk.conf 00:01:12.168 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.168 SPDK_TEST_NVMF=1 00:01:12.168 SPDK_TEST_NVME_CLI=1 00:01:12.169 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.169 SPDK_TEST_NVMF_NICS=e810 00:01:12.169 SPDK_TEST_VFIOUSER=1 00:01:12.169 SPDK_RUN_UBSAN=1 00:01:12.169 NET_TYPE=phy 00:01:12.176 RUN_NIGHTLY=0 00:01:12.182 [Pipeline] readFile 00:01:12.213 [Pipeline] withEnv 00:01:12.215 [Pipeline] { 00:01:12.230 [Pipeline] sh 00:01:12.517 + set -ex 00:01:12.517 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:12.517 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:12.517 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.517 ++ SPDK_TEST_NVMF=1 00:01:12.517 ++ SPDK_TEST_NVME_CLI=1 00:01:12.517 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.517 ++ SPDK_TEST_NVMF_NICS=e810 00:01:12.517 ++ SPDK_TEST_VFIOUSER=1 00:01:12.517 ++ SPDK_RUN_UBSAN=1 00:01:12.517 ++ NET_TYPE=phy 00:01:12.517 ++ RUN_NIGHTLY=0 00:01:12.517 + case $SPDK_TEST_NVMF_NICS in 00:01:12.517 + DRIVERS=ice 00:01:12.517 + [[ tcp == \r\d\m\a ]] 00:01:12.517 + [[ -n ice ]] 00:01:12.517 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:12.517 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:12.517 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:12.517 rmmod: ERROR: Module irdma is not currently loaded 00:01:12.517 rmmod: ERROR: Module i40iw is not currently loaded 00:01:12.517 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:12.517 + true 00:01:12.517 + for D in $DRIVERS 00:01:12.517 + sudo modprobe ice 00:01:12.517 + exit 0 00:01:12.527 [Pipeline] } 00:01:12.546 [Pipeline] // withEnv 00:01:12.553 [Pipeline] } 00:01:12.569 [Pipeline] // stage 00:01:12.579 [Pipeline] catchError 00:01:12.581 [Pipeline] { 00:01:12.599 [Pipeline] timeout 00:01:12.599 Timeout set to expire in 50 min 00:01:12.601 [Pipeline] { 00:01:12.618 [Pipeline] stage 00:01:12.620 [Pipeline] { (Tests) 00:01:12.638 [Pipeline] sh 00:01:12.923 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.923 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.923 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.923 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:12.923 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:12.923 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:12.923 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:12.923 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:12.923 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:12.923 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:12.923 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:12.923 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.923 + source /etc/os-release 00:01:12.923 ++ NAME='Fedora Linux' 00:01:12.923 ++ VERSION='38 (Cloud Edition)' 00:01:12.923 ++ ID=fedora 00:01:12.923 ++ VERSION_ID=38 00:01:12.923 ++ VERSION_CODENAME= 00:01:12.923 ++ PLATFORM_ID=platform:f38 00:01:12.923 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:12.923 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:12.923 ++ LOGO=fedora-logo-icon 00:01:12.923 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:12.923 ++ HOME_URL=https://fedoraproject.org/ 00:01:12.923 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:12.923 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:12.923 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:12.923 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:12.923 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:12.923 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:12.923 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:12.923 ++ SUPPORT_END=2024-05-14 00:01:12.923 ++ VARIANT='Cloud Edition' 00:01:12.923 ++ VARIANT_ID=cloud 00:01:12.923 + uname -a 00:01:12.923 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:12.923 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:15.456 Hugepages 00:01:15.456 node hugesize free / total 00:01:15.456 node0 1048576kB 0 / 0 00:01:15.456 node0 2048kB 0 / 0 00:01:15.456 node1 1048576kB 0 / 0 00:01:15.456 node1 2048kB 0 / 0 00:01:15.456 00:01:15.456 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:15.456 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:15.456 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:15.456 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:15.456 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:15.456 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:15.456 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:15.456 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:15.456 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:15.456 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:15.456 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:15.456 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:15.456 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:15.456 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:15.456 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:15.456 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:15.456 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:15.456 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:15.456 + rm -f /tmp/spdk-ld-path 00:01:15.456 + source autorun-spdk.conf 00:01:15.456 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.456 ++ SPDK_TEST_NVMF=1 00:01:15.456 ++ SPDK_TEST_NVME_CLI=1 00:01:15.456 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.456 ++ SPDK_TEST_NVMF_NICS=e810 00:01:15.456 ++ SPDK_TEST_VFIOUSER=1 00:01:15.456 ++ SPDK_RUN_UBSAN=1 00:01:15.456 ++ NET_TYPE=phy 00:01:15.456 ++ RUN_NIGHTLY=0 00:01:15.456 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:15.456 + [[ -n '' ]] 00:01:15.456 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:15.456 + for M in /var/spdk/build-*-manifest.txt 00:01:15.456 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:01:15.456 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:15.456 + for M in /var/spdk/build-*-manifest.txt 00:01:15.456 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:15.456 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:15.456 ++ uname 00:01:15.456 + [[ Linux == \L\i\n\u\x ]] 00:01:15.456 + sudo dmesg -T 00:01:15.456 + sudo dmesg --clear 00:01:15.456 + dmesg_pid=2674324 00:01:15.456 + [[ Fedora Linux == FreeBSD ]] 00:01:15.456 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:15.456 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:15.456 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:15.456 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:15.456 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:15.456 + [[ -x /usr/src/fio-static/fio ]] 00:01:15.456 + export FIO_BIN=/usr/src/fio-static/fio 00:01:15.456 + FIO_BIN=/usr/src/fio-static/fio 00:01:15.456 + sudo dmesg -Tw 00:01:15.456 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:15.456 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:15.456 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:15.457 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:15.457 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:15.457 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:15.457 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:15.457 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:15.457 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:15.715 Test configuration: 00:01:15.715 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.715 SPDK_TEST_NVMF=1 00:01:15.715 SPDK_TEST_NVME_CLI=1 00:01:15.715 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.715 SPDK_TEST_NVMF_NICS=e810 00:01:15.715 SPDK_TEST_VFIOUSER=1 00:01:15.715 SPDK_RUN_UBSAN=1 00:01:15.715 NET_TYPE=phy 00:01:15.715 RUN_NIGHTLY=0 13:42:42 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:15.715 13:42:42 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:15.715 13:42:42 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:15.715 13:42:42 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:15.715 13:42:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.715 13:42:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.715 13:42:42 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.715 13:42:42 -- paths/export.sh@5 -- $ export PATH 00:01:15.715 13:42:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.715 13:42:42 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:15.715 13:42:42 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:15.715 13:42:42 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721994162.XXXXXX 00:01:15.715 13:42:42 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721994162.xhPPxj 00:01:15.715 13:42:42 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:15.715 13:42:42 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:15.715 13:42:42 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:15.715 13:42:42 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:15.715 13:42:42 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:15.715 13:42:42 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:15.715 13:42:42 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:15.715 13:42:42 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.715 13:42:43 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:15.715 13:42:43 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:15.715 13:42:43 -- pm/common@17 -- $ local monitor 00:01:15.715 13:42:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.715 13:42:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.715 13:42:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.715 13:42:43 -- pm/common@21 -- $ date +%s 00:01:15.715 13:42:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.715 13:42:43 -- pm/common@21 -- $ date +%s 00:01:15.715 13:42:43 -- pm/common@25 -- $ sleep 1 00:01:15.715 13:42:43 -- pm/common@21 -- $ date +%s 00:01:15.716 13:42:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721994163 00:01:15.716 13:42:43 -- pm/common@21 -- $ date +%s 00:01:15.716 13:42:43 -- pm/common@21 -- 
$ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721994163 00:01:15.716 13:42:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721994163 00:01:15.716 13:42:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721994163 00:01:15.716 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721994163_collect-vmstat.pm.log 00:01:15.716 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721994163_collect-cpu-load.pm.log 00:01:15.716 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721994163_collect-cpu-temp.pm.log 00:01:15.716 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721994163_collect-bmc-pm.bmc.pm.log 00:01:16.651 13:42:44 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:16.651 13:42:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:16.651 13:42:44 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:16.651 13:42:44 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:16.651 13:42:44 -- spdk/autobuild.sh@16 -- $ date -u 00:01:16.651 Fri Jul 26 11:42:44 AM UTC 2024 00:01:16.651 13:42:44 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:16.651 v24.09-pre-322-ga14c64d79 00:01:16.651 13:42:44 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:16.651 13:42:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:16.651 13:42:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:16.651 13:42:44 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:16.651 13:42:44 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:16.651 13:42:44 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.651 ************************************ 00:01:16.651 START TEST ubsan 00:01:16.651 ************************************ 00:01:16.651 13:42:44 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:16.651 using ubsan 00:01:16.651 00:01:16.651 real 0m0.001s 00:01:16.651 user 0m0.000s 00:01:16.651 sys 0m0.001s 00:01:16.651 13:42:44 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:16.651 13:42:44 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:16.651 ************************************ 00:01:16.651 END TEST ubsan 00:01:16.651 ************************************ 00:01:16.910 13:42:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:16.910 13:42:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:16.910 13:42:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:16.910 13:42:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:16.910 13:42:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:16.910 13:42:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:16.910 13:42:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:16.910 13:42:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:16.910 13:42:44 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma 
--with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:16.910 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:16.910 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:17.169 Using 'verbs' RDMA provider 00:01:30.322 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:40.298 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:40.557 Creating mk/config.mk...done. 00:01:40.557 Creating mk/cc.flags.mk...done. 00:01:40.557 Type 'make' to build. 00:01:40.557 13:43:07 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:01:40.557 13:43:07 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:40.557 13:43:07 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:40.557 13:43:07 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.557 ************************************ 00:01:40.557 START TEST make 00:01:40.557 ************************************ 00:01:40.557 13:43:07 make -- common/autotest_common.sh@1125 -- $ make -j96 00:01:41.124 make[1]: Nothing to be done for 'all'. 00:01:42.514 The Meson build system 00:01:42.514 Version: 1.3.1 00:01:42.514 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:42.514 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:42.514 Build type: native build 00:01:42.514 Project name: libvfio-user 00:01:42.514 Project version: 0.0.1 00:01:42.514 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:42.514 C linker for the host machine: cc ld.bfd 2.39-16 00:01:42.514 Host machine cpu family: x86_64 00:01:42.514 Host machine cpu: x86_64 00:01:42.514 Run-time dependency threads found: YES 00:01:42.514 Library dl found: YES 00:01:42.514 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:42.514 Run-time dependency json-c found: YES 0.17 00:01:42.514 Run-time dependency cmocka found: YES 1.1.7 00:01:42.514 Program pytest-3 found: NO 00:01:42.514 Program flake8 found: NO 00:01:42.514 Program misspell-fixer found: NO 00:01:42.514 Program restructuredtext-lint found: NO 00:01:42.514 Program valgrind found: YES (/usr/bin/valgrind) 00:01:42.514 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:42.514 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:42.514 Compiler for C supports arguments -Wwrite-strings: YES 00:01:42.514 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:42.514 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:42.514 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:42.514 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:42.514 Build targets in project: 8 00:01:42.514 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:42.514 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:42.514 00:01:42.514 libvfio-user 0.0.1 00:01:42.514 00:01:42.514 User defined options 00:01:42.514 buildtype : debug 00:01:42.514 default_library: shared 00:01:42.514 libdir : /usr/local/lib 00:01:42.514 00:01:42.514 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:42.772 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:42.772 [1/37] Compiling C object samples/null.p/null.c.o 00:01:42.772 [2/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:42.772 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:42.772 [4/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:42.772 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:42.772 [6/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:42.772 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:42.772 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:42.772 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:42.772 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:42.772 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:42.772 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:42.772 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:42.772 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:42.772 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:42.772 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:42.772 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:42.772 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:42.772 [19/37] Compiling C object samples/server.p/server.c.o 00:01:42.772 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:42.772 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:42.772 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:42.772 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:42.772 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:42.772 [25/37] Compiling C object samples/client.p/client.c.o 00:01:42.772 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:43.030 [27/37] Linking target samples/client 00:01:43.030 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:43.030 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:43.030 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:43.030 [31/37] Linking target test/unit_tests 00:01:43.030 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:43.289 [33/37] Linking target samples/server 00:01:43.289 [34/37] Linking target samples/gpio-pci-idio-16 00:01:43.289 [35/37] Linking target samples/null 00:01:43.289 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:43.289 [37/37] Linking target samples/lspci 00:01:43.289 INFO: autodetecting backend as ninja 00:01:43.289 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:43.289 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:43.548 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:43.548 ninja: no work to do. 00:01:48.817 The Meson build system 00:01:48.817 Version: 1.3.1 00:01:48.817 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:48.817 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:48.817 Build type: native build 00:01:48.817 Program cat found: YES (/usr/bin/cat) 00:01:48.817 Project name: DPDK 00:01:48.817 Project version: 24.03.0 00:01:48.817 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:48.817 C linker for the host machine: cc ld.bfd 2.39-16 00:01:48.817 Host machine cpu family: x86_64 00:01:48.817 Host machine cpu: x86_64 00:01:48.817 Message: ## Building in Developer Mode ## 00:01:48.817 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:48.817 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:48.817 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:48.817 Program python3 found: YES (/usr/bin/python3) 00:01:48.817 Program cat found: YES (/usr/bin/cat) 00:01:48.817 Compiler for C supports arguments -march=native: YES 00:01:48.817 Checking for size of "void *" : 8 00:01:48.817 Checking for size of "void *" : 8 (cached) 00:01:48.817 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:48.817 Library m found: YES 00:01:48.817 Library numa found: YES 00:01:48.817 Has header "numaif.h" : YES 00:01:48.818 Library fdt found: NO 00:01:48.818 Library execinfo found: NO 00:01:48.818 Has header "execinfo.h" : YES 00:01:48.818 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:48.818 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:48.818 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:48.818 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:48.818 Run-time dependency openssl found: YES 3.0.9 00:01:48.818 Run-time dependency libpcap found: YES 1.10.4 00:01:48.818 Has header "pcap.h" with dependency libpcap: YES 00:01:48.818 Compiler for C supports arguments -Wcast-qual: YES 00:01:48.818 Compiler for C supports arguments -Wdeprecated: YES 00:01:48.818 Compiler for C supports arguments -Wformat: YES 00:01:48.818 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:48.818 Compiler for C supports arguments -Wformat-security: NO 00:01:48.818 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:48.818 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:48.818 Compiler for C supports arguments -Wnested-externs: YES 00:01:48.818 Compiler for C supports arguments -Wold-style-definition: YES 00:01:48.818 Compiler for C supports arguments -Wpointer-arith: YES 00:01:48.818 Compiler for C supports arguments -Wsign-compare: YES 00:01:48.818 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:48.818 Compiler for C supports arguments -Wundef: YES 00:01:48.818 Compiler for C supports arguments -Wwrite-strings: YES 00:01:48.818 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:48.818 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:48.818 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:48.818 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:48.818 Program objdump found: YES (/usr/bin/objdump) 00:01:48.818 Compiler for C supports arguments -mavx512f: YES 00:01:48.818 Checking if "AVX512 checking" compiles: YES 00:01:48.818 Fetching value of define "__SSE4_2__" : 1 00:01:48.818 Fetching value of define "__AES__" : 1 00:01:48.818 Fetching value of define "__AVX__" : 1 00:01:48.818 Fetching value of define "__AVX2__" : 1 00:01:48.818 Fetching value of define "__AVX512BW__" : 1 00:01:48.818 Fetching value of define "__AVX512CD__" : 1 00:01:48.818 Fetching value of define "__AVX512DQ__" : 1 00:01:48.818 Fetching value of define "__AVX512F__" : 1 00:01:48.818 Fetching value of define "__AVX512VL__" : 1 00:01:48.818 Fetching value of define "__PCLMUL__" : 1 00:01:48.818 Fetching value of define "__RDRND__" : 1 00:01:48.818 Fetching value of define "__RDSEED__" : 1 00:01:48.818 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:48.818 Fetching value of define "__znver1__" : (undefined) 00:01:48.818 Fetching value of define "__znver2__" : (undefined) 00:01:48.818 Fetching value of define "__znver3__" : (undefined) 00:01:48.818 Fetching value of define "__znver4__" : (undefined) 00:01:48.818 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:48.818 Message: lib/log: Defining dependency "log" 00:01:48.818 Message: lib/kvargs: Defining dependency "kvargs" 00:01:48.818 Message: lib/telemetry: Defining dependency "telemetry" 00:01:48.818 Checking for function "getentropy" : NO 00:01:48.818 Message: lib/eal: Defining dependency "eal" 00:01:48.818 Message: lib/ring: Defining dependency "ring" 00:01:48.818 Message: lib/rcu: Defining dependency "rcu" 00:01:48.818 Message: lib/mempool: Defining dependency "mempool" 00:01:48.818 Message: lib/mbuf: Defining dependency "mbuf" 00:01:48.818 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:48.818 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:48.818 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:48.818 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:48.818 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:48.818 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:48.818 Compiler for C supports arguments -mpclmul: YES 00:01:48.818 Compiler for C supports arguments -maes: YES 00:01:48.818 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:48.818 Compiler for C supports arguments -mavx512bw: YES 00:01:48.818 Compiler for C supports arguments -mavx512dq: YES 00:01:48.818 Compiler for C supports arguments -mavx512vl: YES 00:01:48.818 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:48.818 Compiler for C supports arguments -mavx2: YES 00:01:48.818 Compiler for C supports arguments -mavx: YES 00:01:48.818 Message: lib/net: Defining dependency "net" 00:01:48.818 Message: lib/meter: Defining dependency "meter" 00:01:48.818 Message: lib/ethdev: Defining dependency "ethdev" 00:01:48.818 Message: lib/pci: Defining dependency "pci" 00:01:48.818 Message: lib/cmdline: Defining dependency "cmdline" 00:01:48.818 Message: lib/hash: Defining dependency "hash" 00:01:48.818 Message: lib/timer: Defining dependency "timer" 00:01:48.818 Message: lib/compressdev: Defining dependency "compressdev" 00:01:48.818 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:48.818 Message: lib/dmadev: Defining dependency "dmadev" 00:01:48.818 
Compiler for C supports arguments -Wno-cast-qual: YES 00:01:48.818 Message: lib/power: Defining dependency "power" 00:01:48.818 Message: lib/reorder: Defining dependency "reorder" 00:01:48.818 Message: lib/security: Defining dependency "security" 00:01:48.818 Has header "linux/userfaultfd.h" : YES 00:01:48.818 Has header "linux/vduse.h" : YES 00:01:48.818 Message: lib/vhost: Defining dependency "vhost" 00:01:48.818 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:48.818 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:48.818 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:48.818 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:48.818 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:48.818 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:48.818 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:48.818 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:48.818 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:48.818 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:48.818 Program doxygen found: YES (/usr/bin/doxygen) 00:01:48.818 Configuring doxy-api-html.conf using configuration 00:01:48.818 Configuring doxy-api-man.conf using configuration 00:01:48.818 Program mandb found: YES (/usr/bin/mandb) 00:01:48.818 Program sphinx-build found: NO 00:01:48.818 Configuring rte_build_config.h using configuration 00:01:48.818 Message: 00:01:48.818 ================= 00:01:48.818 Applications Enabled 00:01:48.818 ================= 00:01:48.818 00:01:48.818 apps: 00:01:48.818 00:01:48.818 00:01:48.818 Message: 00:01:48.818 ================= 00:01:48.818 Libraries Enabled 00:01:48.818 ================= 00:01:48.818 00:01:48.818 libs: 00:01:48.818 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:48.818 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:48.818 cryptodev, dmadev, power, reorder, security, vhost, 00:01:48.818 00:01:48.818 Message: 00:01:48.818 =============== 00:01:48.818 Drivers Enabled 00:01:48.818 =============== 00:01:48.818 00:01:48.818 common: 00:01:48.818 00:01:48.818 bus: 00:01:48.818 pci, vdev, 00:01:48.818 mempool: 00:01:48.818 ring, 00:01:48.818 dma: 00:01:48.818 00:01:48.818 net: 00:01:48.818 00:01:48.818 crypto: 00:01:48.818 00:01:48.818 compress: 00:01:48.818 00:01:48.818 vdpa: 00:01:48.818 00:01:48.818 00:01:48.818 Message: 00:01:48.818 ================= 00:01:48.818 Content Skipped 00:01:48.818 ================= 00:01:48.818 00:01:48.818 apps: 00:01:48.818 dumpcap: explicitly disabled via build config 00:01:48.818 graph: explicitly disabled via build config 00:01:48.818 pdump: explicitly disabled via build config 00:01:48.818 proc-info: explicitly disabled via build config 00:01:48.818 test-acl: explicitly disabled via build config 00:01:48.818 test-bbdev: explicitly disabled via build config 00:01:48.818 test-cmdline: explicitly disabled via build config 00:01:48.818 test-compress-perf: explicitly disabled via build config 00:01:48.818 test-crypto-perf: explicitly disabled via build config 00:01:48.818 test-dma-perf: explicitly disabled via build config 00:01:48.818 test-eventdev: explicitly disabled via build config 00:01:48.818 test-fib: explicitly disabled via build config 00:01:48.818 test-flow-perf: explicitly disabled via build config 00:01:48.818 test-gpudev: explicitly disabled via build config 
00:01:48.818 test-mldev: explicitly disabled via build config 00:01:48.818 test-pipeline: explicitly disabled via build config 00:01:48.818 test-pmd: explicitly disabled via build config 00:01:48.818 test-regex: explicitly disabled via build config 00:01:48.818 test-sad: explicitly disabled via build config 00:01:48.818 test-security-perf: explicitly disabled via build config 00:01:48.818 00:01:48.818 libs: 00:01:48.818 argparse: explicitly disabled via build config 00:01:48.818 metrics: explicitly disabled via build config 00:01:48.818 acl: explicitly disabled via build config 00:01:48.818 bbdev: explicitly disabled via build config 00:01:48.818 bitratestats: explicitly disabled via build config 00:01:48.818 bpf: explicitly disabled via build config 00:01:48.818 cfgfile: explicitly disabled via build config 00:01:48.818 distributor: explicitly disabled via build config 00:01:48.818 efd: explicitly disabled via build config 00:01:48.818 eventdev: explicitly disabled via build config 00:01:48.818 dispatcher: explicitly disabled via build config 00:01:48.819 gpudev: explicitly disabled via build config 00:01:48.819 gro: explicitly disabled via build config 00:01:48.819 gso: explicitly disabled via build config 00:01:48.819 ip_frag: explicitly disabled via build config 00:01:48.819 jobstats: explicitly disabled via build config 00:01:48.819 latencystats: explicitly disabled via build config 00:01:48.819 lpm: explicitly disabled via build config 00:01:48.819 member: explicitly disabled via build config 00:01:48.819 pcapng: explicitly disabled via build config 00:01:48.819 rawdev: explicitly disabled via build config 00:01:48.819 regexdev: explicitly disabled via build config 00:01:48.819 mldev: explicitly disabled via build config 00:01:48.819 rib: explicitly disabled via build config 00:01:48.819 sched: explicitly disabled via build config 00:01:48.819 stack: explicitly disabled via build config 00:01:48.819 ipsec: explicitly disabled via build config 00:01:48.819 pdcp: explicitly disabled via build config 00:01:48.819 fib: explicitly disabled via build config 00:01:48.819 port: explicitly disabled via build config 00:01:48.819 pdump: explicitly disabled via build config 00:01:48.819 table: explicitly disabled via build config 00:01:48.819 pipeline: explicitly disabled via build config 00:01:48.819 graph: explicitly disabled via build config 00:01:48.819 node: explicitly disabled via build config 00:01:48.819 00:01:48.819 drivers: 00:01:48.819 common/cpt: not in enabled drivers build config 00:01:48.819 common/dpaax: not in enabled drivers build config 00:01:48.819 common/iavf: not in enabled drivers build config 00:01:48.819 common/idpf: not in enabled drivers build config 00:01:48.819 common/ionic: not in enabled drivers build config 00:01:48.819 common/mvep: not in enabled drivers build config 00:01:48.819 common/octeontx: not in enabled drivers build config 00:01:48.819 bus/auxiliary: not in enabled drivers build config 00:01:48.819 bus/cdx: not in enabled drivers build config 00:01:48.819 bus/dpaa: not in enabled drivers build config 00:01:48.819 bus/fslmc: not in enabled drivers build config 00:01:48.819 bus/ifpga: not in enabled drivers build config 00:01:48.819 bus/platform: not in enabled drivers build config 00:01:48.819 bus/uacce: not in enabled drivers build config 00:01:48.819 bus/vmbus: not in enabled drivers build config 00:01:48.819 common/cnxk: not in enabled drivers build config 00:01:48.819 common/mlx5: not in enabled drivers build config 00:01:48.819 common/nfp: not in 
enabled drivers build config 00:01:48.819 common/nitrox: not in enabled drivers build config 00:01:48.819 common/qat: not in enabled drivers build config 00:01:48.819 common/sfc_efx: not in enabled drivers build config 00:01:48.819 mempool/bucket: not in enabled drivers build config 00:01:48.819 mempool/cnxk: not in enabled drivers build config 00:01:48.819 mempool/dpaa: not in enabled drivers build config 00:01:48.819 mempool/dpaa2: not in enabled drivers build config 00:01:48.819 mempool/octeontx: not in enabled drivers build config 00:01:48.819 mempool/stack: not in enabled drivers build config 00:01:48.819 dma/cnxk: not in enabled drivers build config 00:01:48.819 dma/dpaa: not in enabled drivers build config 00:01:48.819 dma/dpaa2: not in enabled drivers build config 00:01:48.819 dma/hisilicon: not in enabled drivers build config 00:01:48.819 dma/idxd: not in enabled drivers build config 00:01:48.819 dma/ioat: not in enabled drivers build config 00:01:48.819 dma/skeleton: not in enabled drivers build config 00:01:48.819 net/af_packet: not in enabled drivers build config 00:01:48.819 net/af_xdp: not in enabled drivers build config 00:01:48.819 net/ark: not in enabled drivers build config 00:01:48.819 net/atlantic: not in enabled drivers build config 00:01:48.819 net/avp: not in enabled drivers build config 00:01:48.819 net/axgbe: not in enabled drivers build config 00:01:48.819 net/bnx2x: not in enabled drivers build config 00:01:48.819 net/bnxt: not in enabled drivers build config 00:01:48.819 net/bonding: not in enabled drivers build config 00:01:48.819 net/cnxk: not in enabled drivers build config 00:01:48.819 net/cpfl: not in enabled drivers build config 00:01:48.819 net/cxgbe: not in enabled drivers build config 00:01:48.819 net/dpaa: not in enabled drivers build config 00:01:48.819 net/dpaa2: not in enabled drivers build config 00:01:48.819 net/e1000: not in enabled drivers build config 00:01:48.819 net/ena: not in enabled drivers build config 00:01:48.819 net/enetc: not in enabled drivers build config 00:01:48.819 net/enetfec: not in enabled drivers build config 00:01:48.819 net/enic: not in enabled drivers build config 00:01:48.819 net/failsafe: not in enabled drivers build config 00:01:48.819 net/fm10k: not in enabled drivers build config 00:01:48.819 net/gve: not in enabled drivers build config 00:01:48.819 net/hinic: not in enabled drivers build config 00:01:48.819 net/hns3: not in enabled drivers build config 00:01:48.819 net/i40e: not in enabled drivers build config 00:01:48.819 net/iavf: not in enabled drivers build config 00:01:48.819 net/ice: not in enabled drivers build config 00:01:48.819 net/idpf: not in enabled drivers build config 00:01:48.819 net/igc: not in enabled drivers build config 00:01:48.819 net/ionic: not in enabled drivers build config 00:01:48.819 net/ipn3ke: not in enabled drivers build config 00:01:48.819 net/ixgbe: not in enabled drivers build config 00:01:48.819 net/mana: not in enabled drivers build config 00:01:48.819 net/memif: not in enabled drivers build config 00:01:48.819 net/mlx4: not in enabled drivers build config 00:01:48.819 net/mlx5: not in enabled drivers build config 00:01:48.819 net/mvneta: not in enabled drivers build config 00:01:48.819 net/mvpp2: not in enabled drivers build config 00:01:48.819 net/netvsc: not in enabled drivers build config 00:01:48.819 net/nfb: not in enabled drivers build config 00:01:48.819 net/nfp: not in enabled drivers build config 00:01:48.819 net/ngbe: not in enabled drivers build config 00:01:48.819 
net/null: not in enabled drivers build config 00:01:48.819 net/octeontx: not in enabled drivers build config 00:01:48.819 net/octeon_ep: not in enabled drivers build config 00:01:48.819 net/pcap: not in enabled drivers build config 00:01:48.819 net/pfe: not in enabled drivers build config 00:01:48.819 net/qede: not in enabled drivers build config 00:01:48.819 net/ring: not in enabled drivers build config 00:01:48.819 net/sfc: not in enabled drivers build config 00:01:48.819 net/softnic: not in enabled drivers build config 00:01:48.819 net/tap: not in enabled drivers build config 00:01:48.819 net/thunderx: not in enabled drivers build config 00:01:48.819 net/txgbe: not in enabled drivers build config 00:01:48.819 net/vdev_netvsc: not in enabled drivers build config 00:01:48.819 net/vhost: not in enabled drivers build config 00:01:48.819 net/virtio: not in enabled drivers build config 00:01:48.819 net/vmxnet3: not in enabled drivers build config 00:01:48.819 raw/*: missing internal dependency, "rawdev" 00:01:48.819 crypto/armv8: not in enabled drivers build config 00:01:48.819 crypto/bcmfs: not in enabled drivers build config 00:01:48.819 crypto/caam_jr: not in enabled drivers build config 00:01:48.819 crypto/ccp: not in enabled drivers build config 00:01:48.819 crypto/cnxk: not in enabled drivers build config 00:01:48.819 crypto/dpaa_sec: not in enabled drivers build config 00:01:48.819 crypto/dpaa2_sec: not in enabled drivers build config 00:01:48.819 crypto/ipsec_mb: not in enabled drivers build config 00:01:48.819 crypto/mlx5: not in enabled drivers build config 00:01:48.819 crypto/mvsam: not in enabled drivers build config 00:01:48.819 crypto/nitrox: not in enabled drivers build config 00:01:48.819 crypto/null: not in enabled drivers build config 00:01:48.819 crypto/octeontx: not in enabled drivers build config 00:01:48.819 crypto/openssl: not in enabled drivers build config 00:01:48.819 crypto/scheduler: not in enabled drivers build config 00:01:48.819 crypto/uadk: not in enabled drivers build config 00:01:48.819 crypto/virtio: not in enabled drivers build config 00:01:48.819 compress/isal: not in enabled drivers build config 00:01:48.819 compress/mlx5: not in enabled drivers build config 00:01:48.819 compress/nitrox: not in enabled drivers build config 00:01:48.819 compress/octeontx: not in enabled drivers build config 00:01:48.819 compress/zlib: not in enabled drivers build config 00:01:48.819 regex/*: missing internal dependency, "regexdev" 00:01:48.819 ml/*: missing internal dependency, "mldev" 00:01:48.819 vdpa/ifc: not in enabled drivers build config 00:01:48.819 vdpa/mlx5: not in enabled drivers build config 00:01:48.819 vdpa/nfp: not in enabled drivers build config 00:01:48.819 vdpa/sfc: not in enabled drivers build config 00:01:48.819 event/*: missing internal dependency, "eventdev" 00:01:48.819 baseband/*: missing internal dependency, "bbdev" 00:01:48.819 gpu/*: missing internal dependency, "gpudev" 00:01:48.819 00:01:48.819 00:01:48.819 Build targets in project: 85 00:01:48.819 00:01:48.819 DPDK 24.03.0 00:01:48.819 00:01:48.819 User defined options 00:01:48.819 buildtype : debug 00:01:48.819 default_library : shared 00:01:48.819 libdir : lib 00:01:48.819 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:48.819 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:48.819 c_link_args : 00:01:48.819 cpu_instruction_set: native 00:01:48.819 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:48.820 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:48.820 enable_docs : false 00:01:48.820 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:48.820 enable_kmods : false 00:01:48.820 max_lcores : 128 00:01:48.820 tests : false 00:01:48.820 00:01:48.820 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:49.088 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:49.088 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:49.352 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:49.352 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:49.352 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:49.352 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:49.352 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:49.352 [7/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:49.352 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:49.352 [9/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:49.352 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:49.352 [11/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:49.352 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:49.353 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:49.353 [14/268] Linking static target lib/librte_log.a 00:01:49.353 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:49.353 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:49.353 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:49.353 [18/268] Linking static target lib/librte_kvargs.a 00:01:49.353 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:49.353 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:49.353 [21/268] Linking static target lib/librte_pci.a 00:01:49.614 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:49.614 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:49.614 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:49.614 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:49.614 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:49.614 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:49.614 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:49.614 [29/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:49.614 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:49.614 [31/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:49.876 [32/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:49.876 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:49.876 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:49.876 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:49.876 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:49.876 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:49.876 [38/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:49.876 [39/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:49.876 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:49.876 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:49.876 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:49.876 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:49.876 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:49.876 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:49.876 [46/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:49.876 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:49.876 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:49.876 [49/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:49.876 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:49.876 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:49.876 [52/268] Linking static target lib/librte_meter.a 00:01:49.876 [53/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:49.876 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:49.876 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:49.876 [56/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:49.876 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:49.876 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:49.876 [59/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:49.876 [60/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:49.876 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:49.876 [62/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:49.876 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:49.876 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:49.876 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:49.876 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:49.876 [67/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:49.876 [68/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:49.876 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:49.876 [70/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:49.876 [71/268] 
Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:49.876 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:49.876 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:49.876 [74/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:49.876 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:49.876 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:49.876 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:49.876 [78/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:49.876 [79/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.876 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:49.876 [81/268] Linking static target lib/librte_telemetry.a 00:01:49.876 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:49.876 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:49.876 [84/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:49.876 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:49.876 [86/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:49.876 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:49.876 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:49.876 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:49.876 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:49.876 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:49.876 [92/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.876 [93/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:49.876 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:49.876 [95/268] Linking static target lib/librte_ring.a 00:01:49.876 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:49.876 [97/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:49.876 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:49.876 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:49.876 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:49.876 [101/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:49.876 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:49.876 [103/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:49.876 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:49.876 [105/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:49.876 [106/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:49.876 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:49.876 [108/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:49.876 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:49.876 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:49.876 [111/268] Linking static target 
lib/librte_mempool.a 00:01:49.876 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:49.876 [113/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:49.876 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:49.876 [115/268] Linking static target lib/librte_net.a 00:01:49.876 [116/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:49.876 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:49.877 [118/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:49.877 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:49.877 [120/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:49.877 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:50.135 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:50.135 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:50.135 [124/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:50.135 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:50.135 [126/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:50.135 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:50.135 [128/268] Linking static target lib/librte_rcu.a 00:01:50.135 [129/268] Linking static target lib/librte_eal.a 00:01:50.135 [130/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.135 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:50.135 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:50.135 [133/268] Linking target lib/librte_log.so.24.1 00:01:50.135 [134/268] Linking static target lib/librte_cmdline.a 00:01:50.135 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.135 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:50.135 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:50.135 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:50.135 [139/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.135 [140/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:50.135 [141/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:50.135 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:50.135 [143/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:50.135 [144/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.135 [145/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:50.135 [146/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:50.135 [147/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:50.135 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:50.135 [149/268] Linking target lib/librte_kvargs.so.24.1 00:01:50.135 [150/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:50.135 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:50.135 [152/268] Compiling 
C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:50.395 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:50.395 [154/268] Linking static target lib/librte_mbuf.a 00:01:50.395 [155/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:50.395 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:50.395 [157/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:50.395 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:50.395 [159/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:50.395 [160/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.395 [161/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:50.395 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:50.395 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:50.395 [164/268] Linking static target lib/librte_timer.a 00:01:50.395 [165/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.395 [166/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:50.395 [167/268] Linking static target lib/librte_reorder.a 00:01:50.395 [168/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:50.395 [169/268] Linking target lib/librte_telemetry.so.24.1 00:01:50.395 [170/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:50.395 [171/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:50.395 [172/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:50.395 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:50.395 [174/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:50.395 [175/268] Linking static target lib/librte_dmadev.a 00:01:50.395 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:50.395 [177/268] Linking static target lib/librte_power.a 00:01:50.395 [178/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:50.395 [179/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:50.395 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:50.395 [181/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:50.395 [182/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:50.395 [183/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:50.395 [184/268] Linking static target lib/librte_compressdev.a 00:01:50.395 [185/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:50.395 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:50.395 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:50.395 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:50.395 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:50.395 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:50.395 [191/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:50.395 [192/268] Generating symbol file 
lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:50.395 [193/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:50.395 [194/268] Linking static target lib/librte_hash.a 00:01:50.395 [195/268] Linking static target lib/librte_security.a 00:01:50.395 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:50.395 [197/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:50.654 [198/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:50.654 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:50.654 [200/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:50.654 [201/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:50.654 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:50.654 [203/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:50.654 [204/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:50.654 [205/268] Linking static target drivers/librte_mempool_ring.a 00:01:50.654 [206/268] Linking static target drivers/librte_bus_vdev.a 00:01:50.654 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:50.654 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:50.654 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:50.654 [210/268] Linking static target drivers/librte_bus_pci.a 00:01:50.654 [211/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.654 [212/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:50.654 [213/268] Linking static target lib/librte_cryptodev.a 00:01:50.654 [214/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.654 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.913 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:50.913 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.913 [218/268] Linking static target lib/librte_ethdev.a 00:01:50.913 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.913 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.171 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.171 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.171 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.171 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.171 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:51.431 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.431 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.999 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:51.999 
[229/268] Linking static target lib/librte_vhost.a 00:01:52.568 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.947 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.246 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.826 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.084 [234/268] Linking target lib/librte_eal.so.24.1 00:02:00.084 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:00.084 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:00.084 [237/268] Linking target lib/librte_pci.so.24.1 00:02:00.084 [238/268] Linking target lib/librte_timer.so.24.1 00:02:00.084 [239/268] Linking target lib/librte_ring.so.24.1 00:02:00.084 [240/268] Linking target lib/librte_meter.so.24.1 00:02:00.084 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:00.344 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:00.344 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:00.344 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:00.344 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:00.344 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:00.344 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:00.344 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:00.344 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:00.604 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:00.604 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:00.604 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:00.604 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:00.604 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:00.604 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:00.604 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:00.604 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:00.604 [258/268] Linking target lib/librte_net.so.24.1 00:02:00.863 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:00.863 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:00.863 [261/268] Linking target lib/librte_hash.so.24.1 00:02:00.863 [262/268] Linking target lib/librte_security.so.24.1 00:02:00.863 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:00.863 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:00.863 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:01.122 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:01.122 [267/268] Linking target lib/librte_power.so.24.1 00:02:01.122 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:01.122 INFO: autodetecting backend as ninja 00:02:01.122 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:02.060 CC lib/ut_mock/mock.o 00:02:02.060 CC lib/log/log.o 00:02:02.060 CC lib/log/log_flags.o 00:02:02.060 CC 
lib/log/log_deprecated.o 00:02:02.060 CC lib/ut/ut.o 00:02:02.060 LIB libspdk_ut_mock.a 00:02:02.319 LIB libspdk_log.a 00:02:02.319 SO libspdk_ut_mock.so.6.0 00:02:02.319 LIB libspdk_ut.a 00:02:02.319 SO libspdk_log.so.7.0 00:02:02.319 SYMLINK libspdk_ut_mock.so 00:02:02.319 SO libspdk_ut.so.2.0 00:02:02.319 SYMLINK libspdk_log.so 00:02:02.319 SYMLINK libspdk_ut.so 00:02:02.579 CC lib/dma/dma.o 00:02:02.579 CC lib/ioat/ioat.o 00:02:02.579 CC lib/util/base64.o 00:02:02.579 CC lib/util/bit_array.o 00:02:02.579 CC lib/util/cpuset.o 00:02:02.579 CC lib/util/crc16.o 00:02:02.579 CC lib/util/crc32.o 00:02:02.579 CC lib/util/crc32c.o 00:02:02.579 CC lib/util/crc32_ieee.o 00:02:02.579 CC lib/util/crc64.o 00:02:02.579 CC lib/util/dif.o 00:02:02.579 CC lib/util/fd.o 00:02:02.579 CC lib/util/fd_group.o 00:02:02.580 CC lib/util/file.o 00:02:02.580 CC lib/util/hexlify.o 00:02:02.580 CC lib/util/math.o 00:02:02.580 CC lib/util/iov.o 00:02:02.580 CC lib/util/net.o 00:02:02.580 CC lib/util/pipe.o 00:02:02.580 CC lib/util/strerror_tls.o 00:02:02.580 CC lib/util/uuid.o 00:02:02.580 CC lib/util/string.o 00:02:02.580 CC lib/util/xor.o 00:02:02.580 CC lib/util/zipf.o 00:02:02.580 CXX lib/trace_parser/trace.o 00:02:02.839 LIB libspdk_dma.a 00:02:02.839 CC lib/vfio_user/host/vfio_user.o 00:02:02.839 CC lib/vfio_user/host/vfio_user_pci.o 00:02:02.839 SO libspdk_dma.so.4.0 00:02:02.839 LIB libspdk_ioat.a 00:02:02.839 SYMLINK libspdk_dma.so 00:02:02.839 SO libspdk_ioat.so.7.0 00:02:02.839 SYMLINK libspdk_ioat.so 00:02:02.839 LIB libspdk_vfio_user.a 00:02:03.098 SO libspdk_vfio_user.so.5.0 00:02:03.098 LIB libspdk_util.a 00:02:03.098 SYMLINK libspdk_vfio_user.so 00:02:03.098 SO libspdk_util.so.10.0 00:02:03.098 SYMLINK libspdk_util.so 00:02:03.358 LIB libspdk_trace_parser.a 00:02:03.358 SO libspdk_trace_parser.so.5.0 00:02:03.358 SYMLINK libspdk_trace_parser.so 00:02:03.358 CC lib/env_dpdk/env.o 00:02:03.358 CC lib/env_dpdk/memory.o 00:02:03.358 CC lib/env_dpdk/pci.o 00:02:03.358 CC lib/rdma_utils/rdma_utils.o 00:02:03.358 CC lib/env_dpdk/threads.o 00:02:03.358 CC lib/env_dpdk/init.o 00:02:03.358 CC lib/env_dpdk/pci_ioat.o 00:02:03.358 CC lib/env_dpdk/pci_virtio.o 00:02:03.358 CC lib/env_dpdk/pci_event.o 00:02:03.358 CC lib/env_dpdk/pci_vmd.o 00:02:03.358 CC lib/env_dpdk/pci_idxd.o 00:02:03.358 CC lib/env_dpdk/sigbus_handler.o 00:02:03.358 CC lib/env_dpdk/pci_dpdk.o 00:02:03.358 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:03.358 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:03.358 CC lib/vmd/led.o 00:02:03.358 CC lib/vmd/vmd.o 00:02:03.358 CC lib/conf/conf.o 00:02:03.358 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:03.358 CC lib/rdma_provider/common.o 00:02:03.358 CC lib/json/json_parse.o 00:02:03.358 CC lib/json/json_util.o 00:02:03.358 CC lib/json/json_write.o 00:02:03.616 CC lib/idxd/idxd.o 00:02:03.616 CC lib/idxd/idxd_kernel.o 00:02:03.616 CC lib/idxd/idxd_user.o 00:02:03.616 LIB libspdk_rdma_provider.a 00:02:03.616 SO libspdk_rdma_provider.so.6.0 00:02:03.616 LIB libspdk_conf.a 00:02:03.616 LIB libspdk_rdma_utils.a 00:02:03.875 SO libspdk_conf.so.6.0 00:02:03.875 SO libspdk_rdma_utils.so.1.0 00:02:03.875 LIB libspdk_json.a 00:02:03.875 SYMLINK libspdk_rdma_provider.so 00:02:03.875 SO libspdk_json.so.6.0 00:02:03.875 SYMLINK libspdk_conf.so 00:02:03.875 SYMLINK libspdk_rdma_utils.so 00:02:03.875 SYMLINK libspdk_json.so 00:02:03.875 LIB libspdk_idxd.a 00:02:03.875 SO libspdk_idxd.so.12.0 00:02:03.875 LIB libspdk_vmd.a 00:02:04.135 SYMLINK libspdk_idxd.so 00:02:04.135 SO libspdk_vmd.so.6.0 00:02:04.135 SYMLINK 
libspdk_vmd.so 00:02:04.135 CC lib/jsonrpc/jsonrpc_server.o 00:02:04.135 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:04.135 CC lib/jsonrpc/jsonrpc_client.o 00:02:04.135 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:04.394 LIB libspdk_jsonrpc.a 00:02:04.394 SO libspdk_jsonrpc.so.6.0 00:02:04.394 LIB libspdk_env_dpdk.a 00:02:04.394 SYMLINK libspdk_jsonrpc.so 00:02:04.653 SO libspdk_env_dpdk.so.15.0 00:02:04.653 SYMLINK libspdk_env_dpdk.so 00:02:04.653 CC lib/rpc/rpc.o 00:02:04.912 LIB libspdk_rpc.a 00:02:04.912 SO libspdk_rpc.so.6.0 00:02:04.912 SYMLINK libspdk_rpc.so 00:02:05.172 CC lib/trace/trace.o 00:02:05.172 CC lib/keyring/keyring.o 00:02:05.172 CC lib/trace/trace_flags.o 00:02:05.172 CC lib/keyring/keyring_rpc.o 00:02:05.172 CC lib/trace/trace_rpc.o 00:02:05.432 CC lib/notify/notify.o 00:02:05.432 CC lib/notify/notify_rpc.o 00:02:05.432 LIB libspdk_notify.a 00:02:05.432 LIB libspdk_keyring.a 00:02:05.432 SO libspdk_notify.so.6.0 00:02:05.432 LIB libspdk_trace.a 00:02:05.432 SO libspdk_keyring.so.1.0 00:02:05.432 SYMLINK libspdk_notify.so 00:02:05.432 SO libspdk_trace.so.10.0 00:02:05.692 SYMLINK libspdk_keyring.so 00:02:05.692 SYMLINK libspdk_trace.so 00:02:05.955 CC lib/thread/thread.o 00:02:05.955 CC lib/thread/iobuf.o 00:02:05.955 CC lib/sock/sock.o 00:02:05.955 CC lib/sock/sock_rpc.o 00:02:06.214 LIB libspdk_sock.a 00:02:06.214 SO libspdk_sock.so.10.0 00:02:06.214 SYMLINK libspdk_sock.so 00:02:06.474 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:06.474 CC lib/nvme/nvme_ctrlr.o 00:02:06.474 CC lib/nvme/nvme_ns.o 00:02:06.474 CC lib/nvme/nvme_fabric.o 00:02:06.474 CC lib/nvme/nvme_ns_cmd.o 00:02:06.474 CC lib/nvme/nvme_pcie_common.o 00:02:06.474 CC lib/nvme/nvme.o 00:02:06.474 CC lib/nvme/nvme_pcie.o 00:02:06.474 CC lib/nvme/nvme_qpair.o 00:02:06.474 CC lib/nvme/nvme_transport.o 00:02:06.474 CC lib/nvme/nvme_quirks.o 00:02:06.474 CC lib/nvme/nvme_discovery.o 00:02:06.474 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:06.474 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:06.474 CC lib/nvme/nvme_tcp.o 00:02:06.732 CC lib/nvme/nvme_opal.o 00:02:06.732 CC lib/nvme/nvme_io_msg.o 00:02:06.732 CC lib/nvme/nvme_poll_group.o 00:02:06.732 CC lib/nvme/nvme_zns.o 00:02:06.732 CC lib/nvme/nvme_stubs.o 00:02:06.732 CC lib/nvme/nvme_auth.o 00:02:06.732 CC lib/nvme/nvme_cuse.o 00:02:06.732 CC lib/nvme/nvme_vfio_user.o 00:02:06.732 CC lib/nvme/nvme_rdma.o 00:02:06.992 LIB libspdk_thread.a 00:02:06.992 SO libspdk_thread.so.10.1 00:02:06.992 SYMLINK libspdk_thread.so 00:02:07.251 CC lib/virtio/virtio.o 00:02:07.251 CC lib/virtio/virtio_vfio_user.o 00:02:07.251 CC lib/virtio/virtio_vhost_user.o 00:02:07.251 CC lib/virtio/virtio_pci.o 00:02:07.251 CC lib/blob/blobstore.o 00:02:07.251 CC lib/vfu_tgt/tgt_endpoint.o 00:02:07.251 CC lib/vfu_tgt/tgt_rpc.o 00:02:07.251 CC lib/blob/zeroes.o 00:02:07.251 CC lib/blob/blob_bs_dev.o 00:02:07.251 CC lib/blob/request.o 00:02:07.251 CC lib/accel/accel_rpc.o 00:02:07.251 CC lib/accel/accel.o 00:02:07.251 CC lib/accel/accel_sw.o 00:02:07.251 CC lib/init/json_config.o 00:02:07.251 CC lib/init/subsystem.o 00:02:07.251 CC lib/init/subsystem_rpc.o 00:02:07.251 CC lib/init/rpc.o 00:02:07.510 LIB libspdk_init.a 00:02:07.510 LIB libspdk_virtio.a 00:02:07.510 LIB libspdk_vfu_tgt.a 00:02:07.510 SO libspdk_init.so.5.0 00:02:07.510 SO libspdk_vfu_tgt.so.3.0 00:02:07.510 SO libspdk_virtio.so.7.0 00:02:07.769 SYMLINK libspdk_init.so 00:02:07.769 SYMLINK libspdk_vfu_tgt.so 00:02:07.769 SYMLINK libspdk_virtio.so 00:02:08.029 CC lib/event/reactor.o 00:02:08.029 CC lib/event/app.o 00:02:08.029 CC lib/event/app_rpc.o 
00:02:08.029 CC lib/event/scheduler_static.o 00:02:08.029 CC lib/event/log_rpc.o 00:02:08.029 LIB libspdk_accel.a 00:02:08.029 SO libspdk_accel.so.16.0 00:02:08.029 SYMLINK libspdk_accel.so 00:02:08.290 LIB libspdk_nvme.a 00:02:08.290 LIB libspdk_event.a 00:02:08.290 SO libspdk_nvme.so.13.1 00:02:08.290 SO libspdk_event.so.14.0 00:02:08.290 SYMLINK libspdk_event.so 00:02:08.550 CC lib/bdev/bdev.o 00:02:08.550 CC lib/bdev/bdev_rpc.o 00:02:08.550 CC lib/bdev/bdev_zone.o 00:02:08.550 CC lib/bdev/scsi_nvme.o 00:02:08.550 CC lib/bdev/part.o 00:02:08.550 SYMLINK libspdk_nvme.so 00:02:09.491 LIB libspdk_blob.a 00:02:09.491 SO libspdk_blob.so.11.0 00:02:09.491 SYMLINK libspdk_blob.so 00:02:09.750 CC lib/blobfs/blobfs.o 00:02:09.750 CC lib/blobfs/tree.o 00:02:09.750 CC lib/lvol/lvol.o 00:02:10.319 LIB libspdk_bdev.a 00:02:10.319 LIB libspdk_blobfs.a 00:02:10.319 SO libspdk_bdev.so.16.0 00:02:10.319 SO libspdk_blobfs.so.10.0 00:02:10.319 LIB libspdk_lvol.a 00:02:10.319 SYMLINK libspdk_bdev.so 00:02:10.319 SYMLINK libspdk_blobfs.so 00:02:10.319 SO libspdk_lvol.so.10.0 00:02:10.579 SYMLINK libspdk_lvol.so 00:02:10.579 CC lib/nvmf/ctrlr_discovery.o 00:02:10.579 CC lib/nvmf/ctrlr.o 00:02:10.579 CC lib/nvmf/subsystem.o 00:02:10.579 CC lib/nvmf/ctrlr_bdev.o 00:02:10.579 CC lib/nvmf/nvmf_rpc.o 00:02:10.579 CC lib/nvmf/nvmf.o 00:02:10.579 CC lib/nvmf/transport.o 00:02:10.579 CC lib/ublk/ublk_rpc.o 00:02:10.579 CC lib/ublk/ublk.o 00:02:10.579 CC lib/nvmf/tcp.o 00:02:10.579 CC lib/nvmf/stubs.o 00:02:10.579 CC lib/nvmf/mdns_server.o 00:02:10.579 CC lib/nvmf/vfio_user.o 00:02:10.579 CC lib/ftl/ftl_core.o 00:02:10.579 CC lib/nvmf/rdma.o 00:02:10.579 CC lib/ftl/ftl_layout.o 00:02:10.579 CC lib/ftl/ftl_init.o 00:02:10.579 CC lib/nvmf/auth.o 00:02:10.579 CC lib/ftl/ftl_debug.o 00:02:10.579 CC lib/ftl/ftl_l2p.o 00:02:10.579 CC lib/ftl/ftl_io.o 00:02:10.579 CC lib/ftl/ftl_sb.o 00:02:10.579 CC lib/ftl/ftl_nv_cache.o 00:02:10.838 CC lib/ftl/ftl_l2p_flat.o 00:02:10.838 CC lib/ftl/ftl_band.o 00:02:10.838 CC lib/ftl/ftl_rq.o 00:02:10.838 CC lib/scsi/dev.o 00:02:10.838 CC lib/ftl/ftl_band_ops.o 00:02:10.838 CC lib/ftl/ftl_writer.o 00:02:10.838 CC lib/scsi/lun.o 00:02:10.838 CC lib/scsi/port.o 00:02:10.838 CC lib/ftl/ftl_reloc.o 00:02:10.838 CC lib/ftl/ftl_l2p_cache.o 00:02:10.838 CC lib/scsi/scsi.o 00:02:10.838 CC lib/ftl/ftl_p2l.o 00:02:10.838 CC lib/scsi/scsi_bdev.o 00:02:10.838 CC lib/ftl/mngt/ftl_mngt.o 00:02:10.838 CC lib/scsi/task.o 00:02:10.838 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:10.838 CC lib/scsi/scsi_pr.o 00:02:10.839 CC lib/scsi/scsi_rpc.o 00:02:10.839 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:10.839 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:10.839 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:10.839 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:10.839 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:10.839 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:10.839 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:10.839 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:10.839 CC lib/nbd/nbd.o 00:02:10.839 CC lib/nbd/nbd_rpc.o 00:02:10.839 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:10.839 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:10.839 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:10.839 CC lib/ftl/utils/ftl_conf.o 00:02:10.839 CC lib/ftl/utils/ftl_md.o 00:02:10.839 CC lib/ftl/utils/ftl_mempool.o 00:02:10.839 CC lib/ftl/utils/ftl_bitmap.o 00:02:10.839 CC lib/ftl/utils/ftl_property.o 00:02:10.839 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:10.839 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:10.839 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:10.839 CC 
lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:10.839 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:10.839 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:10.839 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:10.839 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:10.839 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:10.839 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:10.839 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:10.839 CC lib/ftl/base/ftl_base_dev.o 00:02:10.839 CC lib/ftl/ftl_trace.o 00:02:10.839 CC lib/ftl/base/ftl_base_bdev.o 00:02:11.405 LIB libspdk_nbd.a 00:02:11.405 SO libspdk_nbd.so.7.0 00:02:11.405 LIB libspdk_scsi.a 00:02:11.405 SYMLINK libspdk_nbd.so 00:02:11.405 SO libspdk_scsi.so.9.0 00:02:11.405 LIB libspdk_ublk.a 00:02:11.405 SYMLINK libspdk_scsi.so 00:02:11.405 SO libspdk_ublk.so.3.0 00:02:11.405 SYMLINK libspdk_ublk.so 00:02:11.663 LIB libspdk_ftl.a 00:02:11.663 CC lib/iscsi/conn.o 00:02:11.663 CC lib/iscsi/iscsi.o 00:02:11.663 CC lib/iscsi/init_grp.o 00:02:11.663 CC lib/iscsi/md5.o 00:02:11.663 CC lib/iscsi/param.o 00:02:11.663 CC lib/vhost/vhost.o 00:02:11.663 CC lib/iscsi/portal_grp.o 00:02:11.663 CC lib/vhost/vhost_rpc.o 00:02:11.663 CC lib/iscsi/tgt_node.o 00:02:11.663 CC lib/vhost/vhost_scsi.o 00:02:11.663 CC lib/iscsi/iscsi_subsystem.o 00:02:11.663 CC lib/vhost/vhost_blk.o 00:02:11.663 CC lib/vhost/rte_vhost_user.o 00:02:11.663 CC lib/iscsi/iscsi_rpc.o 00:02:11.663 CC lib/iscsi/task.o 00:02:11.922 SO libspdk_ftl.so.9.0 00:02:12.181 SYMLINK libspdk_ftl.so 00:02:12.440 LIB libspdk_nvmf.a 00:02:12.440 SO libspdk_nvmf.so.19.0 00:02:12.440 LIB libspdk_vhost.a 00:02:12.700 SO libspdk_vhost.so.8.0 00:02:12.700 SYMLINK libspdk_nvmf.so 00:02:12.700 SYMLINK libspdk_vhost.so 00:02:12.700 LIB libspdk_iscsi.a 00:02:12.700 SO libspdk_iscsi.so.8.0 00:02:12.960 SYMLINK libspdk_iscsi.so 00:02:13.529 CC module/env_dpdk/env_dpdk_rpc.o 00:02:13.529 CC module/vfu_device/vfu_virtio.o 00:02:13.529 CC module/vfu_device/vfu_virtio_blk.o 00:02:13.529 CC module/vfu_device/vfu_virtio_rpc.o 00:02:13.529 CC module/vfu_device/vfu_virtio_scsi.o 00:02:13.529 LIB libspdk_env_dpdk_rpc.a 00:02:13.529 CC module/keyring/file/keyring_rpc.o 00:02:13.529 CC module/keyring/file/keyring.o 00:02:13.529 CC module/accel/dsa/accel_dsa_rpc.o 00:02:13.529 CC module/accel/dsa/accel_dsa.o 00:02:13.529 CC module/accel/ioat/accel_ioat_rpc.o 00:02:13.529 CC module/accel/ioat/accel_ioat.o 00:02:13.529 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:13.529 CC module/blob/bdev/blob_bdev.o 00:02:13.529 CC module/accel/iaa/accel_iaa_rpc.o 00:02:13.529 CC module/accel/iaa/accel_iaa.o 00:02:13.529 CC module/scheduler/gscheduler/gscheduler.o 00:02:13.529 CC module/accel/error/accel_error.o 00:02:13.530 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:13.530 CC module/accel/error/accel_error_rpc.o 00:02:13.530 SO libspdk_env_dpdk_rpc.so.6.0 00:02:13.530 CC module/sock/posix/posix.o 00:02:13.530 CC module/keyring/linux/keyring.o 00:02:13.530 CC module/keyring/linux/keyring_rpc.o 00:02:13.530 SYMLINK libspdk_env_dpdk_rpc.so 00:02:13.530 LIB libspdk_keyring_file.a 00:02:13.788 LIB libspdk_scheduler_dpdk_governor.a 00:02:13.788 SO libspdk_keyring_file.so.1.0 00:02:13.788 LIB libspdk_scheduler_gscheduler.a 00:02:13.788 LIB libspdk_keyring_linux.a 00:02:13.788 LIB libspdk_accel_ioat.a 00:02:13.788 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:13.788 LIB libspdk_accel_error.a 00:02:13.788 SO libspdk_scheduler_gscheduler.so.4.0 00:02:13.788 LIB libspdk_scheduler_dynamic.a 00:02:13.788 SO libspdk_accel_ioat.so.6.0 00:02:13.788 SYMLINK libspdk_keyring_file.so 
00:02:13.788 LIB libspdk_accel_iaa.a 00:02:13.788 SO libspdk_keyring_linux.so.1.0 00:02:13.788 LIB libspdk_accel_dsa.a 00:02:13.788 SO libspdk_accel_error.so.2.0 00:02:13.788 SO libspdk_scheduler_dynamic.so.4.0 00:02:13.788 LIB libspdk_blob_bdev.a 00:02:13.788 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:13.788 SO libspdk_accel_iaa.so.3.0 00:02:13.788 SYMLINK libspdk_scheduler_gscheduler.so 00:02:13.788 SO libspdk_accel_dsa.so.5.0 00:02:13.788 SYMLINK libspdk_accel_ioat.so 00:02:13.788 SO libspdk_blob_bdev.so.11.0 00:02:13.788 SYMLINK libspdk_scheduler_dynamic.so 00:02:13.788 SYMLINK libspdk_keyring_linux.so 00:02:13.788 SYMLINK libspdk_accel_error.so 00:02:13.788 SYMLINK libspdk_accel_iaa.so 00:02:13.788 SYMLINK libspdk_blob_bdev.so 00:02:13.788 SYMLINK libspdk_accel_dsa.so 00:02:13.788 LIB libspdk_vfu_device.a 00:02:14.049 SO libspdk_vfu_device.so.3.0 00:02:14.049 SYMLINK libspdk_vfu_device.so 00:02:14.049 LIB libspdk_sock_posix.a 00:02:14.049 SO libspdk_sock_posix.so.6.0 00:02:14.307 SYMLINK libspdk_sock_posix.so 00:02:14.307 CC module/bdev/lvol/vbdev_lvol.o 00:02:14.307 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:14.307 CC module/bdev/gpt/gpt.o 00:02:14.307 CC module/bdev/malloc/bdev_malloc.o 00:02:14.307 CC module/bdev/gpt/vbdev_gpt.o 00:02:14.307 CC module/bdev/null/bdev_null.o 00:02:14.307 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:14.307 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:14.307 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:14.307 CC module/bdev/null/bdev_null_rpc.o 00:02:14.307 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:14.307 CC module/bdev/aio/bdev_aio.o 00:02:14.307 CC module/bdev/nvme/bdev_nvme.o 00:02:14.307 CC module/bdev/split/vbdev_split.o 00:02:14.307 CC module/bdev/aio/bdev_aio_rpc.o 00:02:14.307 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:14.307 CC module/bdev/iscsi/bdev_iscsi.o 00:02:14.307 CC module/bdev/split/vbdev_split_rpc.o 00:02:14.307 CC module/bdev/nvme/nvme_rpc.o 00:02:14.307 CC module/bdev/nvme/bdev_mdns_client.o 00:02:14.307 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:14.307 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:14.307 CC module/bdev/raid/bdev_raid.o 00:02:14.307 CC module/bdev/raid/bdev_raid_rpc.o 00:02:14.307 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:14.307 CC module/bdev/nvme/vbdev_opal.o 00:02:14.307 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:14.307 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:14.307 CC module/bdev/raid/raid0.o 00:02:14.307 CC module/bdev/raid/bdev_raid_sb.o 00:02:14.307 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:14.307 CC module/bdev/passthru/vbdev_passthru.o 00:02:14.307 CC module/bdev/raid/raid1.o 00:02:14.307 CC module/bdev/delay/vbdev_delay.o 00:02:14.307 CC module/bdev/raid/concat.o 00:02:14.307 CC module/bdev/ftl/bdev_ftl.o 00:02:14.307 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:14.307 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:14.307 CC module/bdev/error/vbdev_error_rpc.o 00:02:14.307 CC module/bdev/error/vbdev_error.o 00:02:14.307 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:14.307 CC module/blobfs/bdev/blobfs_bdev.o 00:02:14.566 LIB libspdk_bdev_split.a 00:02:14.566 LIB libspdk_blobfs_bdev.a 00:02:14.566 SO libspdk_bdev_split.so.6.0 00:02:14.566 LIB libspdk_bdev_null.a 00:02:14.566 LIB libspdk_bdev_gpt.a 00:02:14.566 SO libspdk_blobfs_bdev.so.6.0 00:02:14.566 SO libspdk_bdev_null.so.6.0 00:02:14.566 SO libspdk_bdev_gpt.so.6.0 00:02:14.566 SYMLINK libspdk_bdev_split.so 00:02:14.566 LIB libspdk_bdev_error.a 00:02:14.566 LIB libspdk_bdev_malloc.a 00:02:14.566 LIB 
libspdk_bdev_ftl.a 00:02:14.566 LIB libspdk_bdev_zone_block.a 00:02:14.566 LIB libspdk_bdev_aio.a 00:02:14.566 LIB libspdk_bdev_passthru.a 00:02:14.566 SYMLINK libspdk_blobfs_bdev.so 00:02:14.566 SO libspdk_bdev_error.so.6.0 00:02:14.566 SYMLINK libspdk_bdev_null.so 00:02:14.566 SO libspdk_bdev_zone_block.so.6.0 00:02:14.566 SO libspdk_bdev_malloc.so.6.0 00:02:14.566 SO libspdk_bdev_ftl.so.6.0 00:02:14.566 SO libspdk_bdev_passthru.so.6.0 00:02:14.566 SYMLINK libspdk_bdev_gpt.so 00:02:14.566 SO libspdk_bdev_aio.so.6.0 00:02:14.566 LIB libspdk_bdev_iscsi.a 00:02:14.566 LIB libspdk_bdev_delay.a 00:02:14.827 SO libspdk_bdev_iscsi.so.6.0 00:02:14.827 SYMLINK libspdk_bdev_error.so 00:02:14.827 SYMLINK libspdk_bdev_zone_block.so 00:02:14.827 SYMLINK libspdk_bdev_malloc.so 00:02:14.827 SYMLINK libspdk_bdev_ftl.so 00:02:14.827 SO libspdk_bdev_delay.so.6.0 00:02:14.827 SYMLINK libspdk_bdev_passthru.so 00:02:14.827 SYMLINK libspdk_bdev_aio.so 00:02:14.827 LIB libspdk_bdev_lvol.a 00:02:14.827 SO libspdk_bdev_lvol.so.6.0 00:02:14.827 LIB libspdk_bdev_virtio.a 00:02:14.827 SYMLINK libspdk_bdev_iscsi.so 00:02:14.827 SYMLINK libspdk_bdev_delay.so 00:02:14.827 SO libspdk_bdev_virtio.so.6.0 00:02:14.827 SYMLINK libspdk_bdev_lvol.so 00:02:14.827 SYMLINK libspdk_bdev_virtio.so 00:02:15.087 LIB libspdk_bdev_raid.a 00:02:15.087 SO libspdk_bdev_raid.so.6.0 00:02:15.349 SYMLINK libspdk_bdev_raid.so 00:02:15.976 LIB libspdk_bdev_nvme.a 00:02:15.976 SO libspdk_bdev_nvme.so.7.0 00:02:15.976 SYMLINK libspdk_bdev_nvme.so 00:02:16.546 CC module/event/subsystems/sock/sock.o 00:02:16.546 CC module/event/subsystems/vmd/vmd.o 00:02:16.546 CC module/event/subsystems/iobuf/iobuf.o 00:02:16.546 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:16.546 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:16.546 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:16.546 CC module/event/subsystems/keyring/keyring.o 00:02:16.546 CC module/event/subsystems/scheduler/scheduler.o 00:02:16.546 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:16.806 LIB libspdk_event_sock.a 00:02:16.806 LIB libspdk_event_iobuf.a 00:02:16.806 LIB libspdk_event_vmd.a 00:02:16.806 SO libspdk_event_sock.so.5.0 00:02:16.806 LIB libspdk_event_keyring.a 00:02:16.806 LIB libspdk_event_vfu_tgt.a 00:02:16.806 LIB libspdk_event_vhost_blk.a 00:02:16.806 LIB libspdk_event_scheduler.a 00:02:16.806 SO libspdk_event_iobuf.so.3.0 00:02:16.806 SO libspdk_event_vmd.so.6.0 00:02:16.806 SYMLINK libspdk_event_sock.so 00:02:16.806 SO libspdk_event_keyring.so.1.0 00:02:16.806 SO libspdk_event_vfu_tgt.so.3.0 00:02:16.806 SO libspdk_event_vhost_blk.so.3.0 00:02:16.806 SO libspdk_event_scheduler.so.4.0 00:02:16.806 SYMLINK libspdk_event_keyring.so 00:02:16.806 SYMLINK libspdk_event_iobuf.so 00:02:16.806 SYMLINK libspdk_event_vmd.so 00:02:16.806 SYMLINK libspdk_event_vhost_blk.so 00:02:16.807 SYMLINK libspdk_event_vfu_tgt.so 00:02:16.807 SYMLINK libspdk_event_scheduler.so 00:02:17.067 CC module/event/subsystems/accel/accel.o 00:02:17.328 LIB libspdk_event_accel.a 00:02:17.328 SO libspdk_event_accel.so.6.0 00:02:17.328 SYMLINK libspdk_event_accel.so 00:02:17.589 CC module/event/subsystems/bdev/bdev.o 00:02:17.849 LIB libspdk_event_bdev.a 00:02:17.849 SO libspdk_event_bdev.so.6.0 00:02:17.849 SYMLINK libspdk_event_bdev.so 00:02:18.110 CC module/event/subsystems/nbd/nbd.o 00:02:18.368 CC module/event/subsystems/ublk/ublk.o 00:02:18.368 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:18.368 CC module/event/subsystems/scsi/scsi.o 00:02:18.368 CC 
module/event/subsystems/nvmf/nvmf_tgt.o 00:02:18.368 LIB libspdk_event_nbd.a 00:02:18.368 SO libspdk_event_nbd.so.6.0 00:02:18.368 LIB libspdk_event_ublk.a 00:02:18.368 LIB libspdk_event_scsi.a 00:02:18.368 SO libspdk_event_ublk.so.3.0 00:02:18.368 SYMLINK libspdk_event_nbd.so 00:02:18.368 SO libspdk_event_scsi.so.6.0 00:02:18.368 SYMLINK libspdk_event_ublk.so 00:02:18.368 LIB libspdk_event_nvmf.a 00:02:18.629 SYMLINK libspdk_event_scsi.so 00:02:18.629 SO libspdk_event_nvmf.so.6.0 00:02:18.629 SYMLINK libspdk_event_nvmf.so 00:02:18.889 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:18.889 CC module/event/subsystems/iscsi/iscsi.o 00:02:18.889 LIB libspdk_event_vhost_scsi.a 00:02:18.889 SO libspdk_event_vhost_scsi.so.3.0 00:02:18.889 LIB libspdk_event_iscsi.a 00:02:18.889 SYMLINK libspdk_event_vhost_scsi.so 00:02:18.889 SO libspdk_event_iscsi.so.6.0 00:02:19.149 SYMLINK libspdk_event_iscsi.so 00:02:19.149 SO libspdk.so.6.0 00:02:19.149 SYMLINK libspdk.so 00:02:19.408 CXX app/trace/trace.o 00:02:19.677 CC app/spdk_nvme_discover/discovery_aer.o 00:02:19.677 CC app/spdk_nvme_perf/perf.o 00:02:19.677 CC app/spdk_lspci/spdk_lspci.o 00:02:19.677 CC app/spdk_top/spdk_top.o 00:02:19.677 CC app/trace_record/trace_record.o 00:02:19.677 CC test/rpc_client/rpc_client_test.o 00:02:19.677 CC app/spdk_nvme_identify/identify.o 00:02:19.677 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:19.677 TEST_HEADER include/spdk/accel.h 00:02:19.677 TEST_HEADER include/spdk/assert.h 00:02:19.677 TEST_HEADER include/spdk/accel_module.h 00:02:19.677 TEST_HEADER include/spdk/base64.h 00:02:19.677 TEST_HEADER include/spdk/bdev.h 00:02:19.677 TEST_HEADER include/spdk/barrier.h 00:02:19.677 TEST_HEADER include/spdk/bdev_module.h 00:02:19.677 TEST_HEADER include/spdk/bdev_zone.h 00:02:19.677 TEST_HEADER include/spdk/bit_pool.h 00:02:19.677 TEST_HEADER include/spdk/bit_array.h 00:02:19.677 TEST_HEADER include/spdk/blob_bdev.h 00:02:19.677 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:19.677 TEST_HEADER include/spdk/blobfs.h 00:02:19.677 TEST_HEADER include/spdk/blob.h 00:02:19.677 TEST_HEADER include/spdk/config.h 00:02:19.677 TEST_HEADER include/spdk/conf.h 00:02:19.677 CC app/iscsi_tgt/iscsi_tgt.o 00:02:19.677 TEST_HEADER include/spdk/crc16.h 00:02:19.677 TEST_HEADER include/spdk/cpuset.h 00:02:19.677 TEST_HEADER include/spdk/crc32.h 00:02:19.677 TEST_HEADER include/spdk/crc64.h 00:02:19.677 TEST_HEADER include/spdk/dif.h 00:02:19.677 TEST_HEADER include/spdk/dma.h 00:02:19.677 TEST_HEADER include/spdk/endian.h 00:02:19.677 TEST_HEADER include/spdk/env.h 00:02:19.677 TEST_HEADER include/spdk/env_dpdk.h 00:02:19.677 TEST_HEADER include/spdk/fd_group.h 00:02:19.677 TEST_HEADER include/spdk/event.h 00:02:19.677 TEST_HEADER include/spdk/file.h 00:02:19.677 CC app/spdk_dd/spdk_dd.o 00:02:19.677 TEST_HEADER include/spdk/gpt_spec.h 00:02:19.677 TEST_HEADER include/spdk/fd.h 00:02:19.677 TEST_HEADER include/spdk/ftl.h 00:02:19.677 TEST_HEADER include/spdk/histogram_data.h 00:02:19.677 TEST_HEADER include/spdk/idxd_spec.h 00:02:19.677 TEST_HEADER include/spdk/hexlify.h 00:02:19.677 TEST_HEADER include/spdk/idxd.h 00:02:19.677 TEST_HEADER include/spdk/ioat.h 00:02:19.677 TEST_HEADER include/spdk/ioat_spec.h 00:02:19.677 TEST_HEADER include/spdk/init.h 00:02:19.677 TEST_HEADER include/spdk/jsonrpc.h 00:02:19.677 TEST_HEADER include/spdk/json.h 00:02:19.677 TEST_HEADER include/spdk/keyring.h 00:02:19.677 TEST_HEADER include/spdk/iscsi_spec.h 00:02:19.677 TEST_HEADER include/spdk/keyring_module.h 00:02:19.677 TEST_HEADER 
include/spdk/log.h 00:02:19.677 TEST_HEADER include/spdk/likely.h 00:02:19.677 TEST_HEADER include/spdk/lvol.h 00:02:19.677 TEST_HEADER include/spdk/memory.h 00:02:19.677 TEST_HEADER include/spdk/net.h 00:02:19.677 TEST_HEADER include/spdk/mmio.h 00:02:19.677 TEST_HEADER include/spdk/nbd.h 00:02:19.677 TEST_HEADER include/spdk/nvme_intel.h 00:02:19.677 TEST_HEADER include/spdk/nvme.h 00:02:19.677 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:19.677 TEST_HEADER include/spdk/notify.h 00:02:19.677 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:19.677 TEST_HEADER include/spdk/nvme_zns.h 00:02:19.677 TEST_HEADER include/spdk/nvme_spec.h 00:02:19.677 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:19.677 TEST_HEADER include/spdk/nvmf.h 00:02:19.677 TEST_HEADER include/spdk/nvmf_spec.h 00:02:19.677 TEST_HEADER include/spdk/nvmf_transport.h 00:02:19.677 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:19.677 TEST_HEADER include/spdk/opal_spec.h 00:02:19.677 TEST_HEADER include/spdk/opal.h 00:02:19.677 TEST_HEADER include/spdk/pci_ids.h 00:02:19.677 TEST_HEADER include/spdk/pipe.h 00:02:19.677 TEST_HEADER include/spdk/queue.h 00:02:19.677 TEST_HEADER include/spdk/rpc.h 00:02:19.677 TEST_HEADER include/spdk/scheduler.h 00:02:19.677 TEST_HEADER include/spdk/scsi.h 00:02:19.677 TEST_HEADER include/spdk/reduce.h 00:02:19.677 TEST_HEADER include/spdk/scsi_spec.h 00:02:19.677 TEST_HEADER include/spdk/sock.h 00:02:19.677 TEST_HEADER include/spdk/string.h 00:02:19.677 TEST_HEADER include/spdk/stdinc.h 00:02:19.677 TEST_HEADER include/spdk/thread.h 00:02:19.677 TEST_HEADER include/spdk/trace.h 00:02:19.677 TEST_HEADER include/spdk/trace_parser.h 00:02:19.677 TEST_HEADER include/spdk/tree.h 00:02:19.677 TEST_HEADER include/spdk/ublk.h 00:02:19.677 TEST_HEADER include/spdk/version.h 00:02:19.677 TEST_HEADER include/spdk/uuid.h 00:02:19.677 TEST_HEADER include/spdk/util.h 00:02:19.677 CC app/spdk_tgt/spdk_tgt.o 00:02:19.677 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:19.677 TEST_HEADER include/spdk/vhost.h 00:02:19.677 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:19.677 TEST_HEADER include/spdk/vmd.h 00:02:19.677 CC app/nvmf_tgt/nvmf_main.o 00:02:19.677 TEST_HEADER include/spdk/xor.h 00:02:19.677 CXX test/cpp_headers/accel.o 00:02:19.677 CXX test/cpp_headers/accel_module.o 00:02:19.677 TEST_HEADER include/spdk/zipf.h 00:02:19.677 CXX test/cpp_headers/assert.o 00:02:19.677 CXX test/cpp_headers/barrier.o 00:02:19.677 CXX test/cpp_headers/base64.o 00:02:19.677 CXX test/cpp_headers/bdev.o 00:02:19.677 CXX test/cpp_headers/bdev_module.o 00:02:19.677 CXX test/cpp_headers/blob_bdev.o 00:02:19.677 CXX test/cpp_headers/bdev_zone.o 00:02:19.677 CXX test/cpp_headers/bit_pool.o 00:02:19.677 CXX test/cpp_headers/bit_array.o 00:02:19.677 CXX test/cpp_headers/blobfs.o 00:02:19.677 CXX test/cpp_headers/blobfs_bdev.o 00:02:19.677 CXX test/cpp_headers/blob.o 00:02:19.677 CXX test/cpp_headers/conf.o 00:02:19.677 CXX test/cpp_headers/cpuset.o 00:02:19.677 CXX test/cpp_headers/config.o 00:02:19.677 CXX test/cpp_headers/crc16.o 00:02:19.677 CXX test/cpp_headers/crc64.o 00:02:19.677 CXX test/cpp_headers/crc32.o 00:02:19.677 CXX test/cpp_headers/dif.o 00:02:19.677 CXX test/cpp_headers/dma.o 00:02:19.677 CXX test/cpp_headers/endian.o 00:02:19.677 CXX test/cpp_headers/env.o 00:02:19.677 CC examples/util/zipf/zipf.o 00:02:19.677 CXX test/cpp_headers/env_dpdk.o 00:02:19.677 CXX test/cpp_headers/event.o 00:02:19.677 CXX test/cpp_headers/fd_group.o 00:02:19.677 CXX test/cpp_headers/file.o 00:02:19.677 CXX test/cpp_headers/fd.o 00:02:19.677 
CXX test/cpp_headers/hexlify.o 00:02:19.677 CXX test/cpp_headers/ftl.o 00:02:19.677 CXX test/cpp_headers/histogram_data.o 00:02:19.677 CXX test/cpp_headers/idxd.o 00:02:19.677 CXX test/cpp_headers/gpt_spec.o 00:02:19.677 CXX test/cpp_headers/idxd_spec.o 00:02:19.677 CXX test/cpp_headers/init.o 00:02:19.677 CC examples/ioat/perf/perf.o 00:02:19.677 CXX test/cpp_headers/ioat_spec.o 00:02:19.677 CXX test/cpp_headers/iscsi_spec.o 00:02:19.677 CXX test/cpp_headers/ioat.o 00:02:19.677 CXX test/cpp_headers/json.o 00:02:19.677 CXX test/cpp_headers/jsonrpc.o 00:02:19.677 CXX test/cpp_headers/likely.o 00:02:19.678 CC examples/ioat/verify/verify.o 00:02:19.678 CXX test/cpp_headers/keyring_module.o 00:02:19.678 CXX test/cpp_headers/lvol.o 00:02:19.678 CXX test/cpp_headers/keyring.o 00:02:19.678 CXX test/cpp_headers/log.o 00:02:19.678 CXX test/cpp_headers/mmio.o 00:02:19.678 CXX test/cpp_headers/net.o 00:02:19.678 CXX test/cpp_headers/memory.o 00:02:19.678 CXX test/cpp_headers/nbd.o 00:02:19.678 CXX test/cpp_headers/nvme.o 00:02:19.678 CXX test/cpp_headers/notify.o 00:02:19.678 CXX test/cpp_headers/nvme_intel.o 00:02:19.678 CXX test/cpp_headers/nvme_ocssd.o 00:02:19.678 CXX test/cpp_headers/nvme_spec.o 00:02:19.678 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:19.678 CXX test/cpp_headers/nvme_zns.o 00:02:19.678 CXX test/cpp_headers/nvmf_cmd.o 00:02:19.678 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:19.678 CXX test/cpp_headers/nvmf_spec.o 00:02:19.678 CXX test/cpp_headers/nvmf.o 00:02:19.678 CXX test/cpp_headers/nvmf_transport.o 00:02:19.678 CXX test/cpp_headers/opal.o 00:02:19.678 CXX test/cpp_headers/opal_spec.o 00:02:19.678 CXX test/cpp_headers/pci_ids.o 00:02:19.678 CXX test/cpp_headers/pipe.o 00:02:19.678 CC app/fio/nvme/fio_plugin.o 00:02:19.678 CC test/env/vtophys/vtophys.o 00:02:19.678 CC test/app/stub/stub.o 00:02:19.678 CC test/app/jsoncat/jsoncat.o 00:02:19.678 CC test/env/pci/pci_ut.o 00:02:19.678 CXX test/cpp_headers/queue.o 00:02:19.678 CC test/app/histogram_perf/histogram_perf.o 00:02:19.678 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:19.678 CC test/thread/poller_perf/poller_perf.o 00:02:19.678 CC test/env/memory/memory_ut.o 00:02:19.678 CXX test/cpp_headers/reduce.o 00:02:19.956 CC app/fio/bdev/fio_plugin.o 00:02:19.956 CC test/app/bdev_svc/bdev_svc.o 00:02:19.956 CC test/dma/test_dma/test_dma.o 00:02:19.956 LINK rpc_client_test 00:02:19.956 LINK interrupt_tgt 00:02:19.956 LINK spdk_lspci 00:02:20.223 LINK iscsi_tgt 00:02:20.223 LINK spdk_nvme_discover 00:02:20.223 CC test/env/mem_callbacks/mem_callbacks.o 00:02:20.223 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:20.223 LINK spdk_trace_record 00:02:20.223 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:20.223 LINK stub 00:02:20.223 CXX test/cpp_headers/rpc.o 00:02:20.223 CXX test/cpp_headers/scheduler.o 00:02:20.223 CXX test/cpp_headers/scsi.o 00:02:20.223 CXX test/cpp_headers/scsi_spec.o 00:02:20.223 CXX test/cpp_headers/sock.o 00:02:20.223 CXX test/cpp_headers/stdinc.o 00:02:20.223 LINK zipf 00:02:20.223 CXX test/cpp_headers/string.o 00:02:20.223 LINK verify 00:02:20.223 CXX test/cpp_headers/thread.o 00:02:20.223 CXX test/cpp_headers/trace.o 00:02:20.223 CXX test/cpp_headers/trace_parser.o 00:02:20.223 CXX test/cpp_headers/tree.o 00:02:20.223 LINK jsoncat 00:02:20.223 CXX test/cpp_headers/util.o 00:02:20.223 CXX test/cpp_headers/ublk.o 00:02:20.223 CXX test/cpp_headers/uuid.o 00:02:20.223 CXX test/cpp_headers/version.o 00:02:20.223 CXX test/cpp_headers/vfio_user_pci.o 00:02:20.223 CXX test/cpp_headers/vfio_user_spec.o 
00:02:20.223 CXX test/cpp_headers/vhost.o 00:02:20.223 CXX test/cpp_headers/vmd.o 00:02:20.223 LINK vtophys 00:02:20.223 CXX test/cpp_headers/xor.o 00:02:20.223 CXX test/cpp_headers/zipf.o 00:02:20.223 LINK nvmf_tgt 00:02:20.223 LINK histogram_perf 00:02:20.223 LINK bdev_svc 00:02:20.223 LINK poller_perf 00:02:20.223 LINK env_dpdk_post_init 00:02:20.482 LINK spdk_tgt 00:02:20.482 LINK ioat_perf 00:02:20.482 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:20.482 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:20.482 LINK spdk_dd 00:02:20.482 LINK pci_ut 00:02:20.482 LINK spdk_trace 00:02:20.482 LINK test_dma 00:02:20.740 LINK nvme_fuzz 00:02:20.740 LINK spdk_nvme 00:02:20.740 CC examples/sock/hello_world/hello_sock.o 00:02:20.740 LINK spdk_bdev 00:02:20.740 CC examples/vmd/lsvmd/lsvmd.o 00:02:20.740 CC examples/idxd/perf/perf.o 00:02:20.740 CC examples/vmd/led/led.o 00:02:20.740 CC test/event/event_perf/event_perf.o 00:02:20.740 CC test/event/reactor_perf/reactor_perf.o 00:02:20.740 CC examples/thread/thread/thread_ex.o 00:02:20.740 CC test/event/reactor/reactor.o 00:02:20.740 CC test/event/app_repeat/app_repeat.o 00:02:20.740 CC test/event/scheduler/scheduler.o 00:02:20.740 LINK spdk_nvme_identify 00:02:20.740 LINK vhost_fuzz 00:02:20.740 LINK spdk_nvme_perf 00:02:20.740 LINK spdk_top 00:02:20.740 LINK mem_callbacks 00:02:20.999 LINK lsvmd 00:02:20.999 LINK led 00:02:20.999 LINK event_perf 00:02:20.999 LINK reactor_perf 00:02:20.999 CC app/vhost/vhost.o 00:02:20.999 LINK reactor 00:02:20.999 LINK hello_sock 00:02:20.999 LINK app_repeat 00:02:20.999 LINK scheduler 00:02:20.999 CC test/nvme/fused_ordering/fused_ordering.o 00:02:20.999 CC test/nvme/cuse/cuse.o 00:02:20.999 CC test/nvme/sgl/sgl.o 00:02:20.999 CC test/nvme/reset/reset.o 00:02:20.999 LINK thread 00:02:20.999 CC test/nvme/e2edp/nvme_dp.o 00:02:20.999 CC test/nvme/simple_copy/simple_copy.o 00:02:20.999 CC test/nvme/err_injection/err_injection.o 00:02:20.999 CC test/nvme/connect_stress/connect_stress.o 00:02:20.999 CC test/nvme/reserve/reserve.o 00:02:20.999 CC test/nvme/compliance/nvme_compliance.o 00:02:20.999 CC test/nvme/startup/startup.o 00:02:20.999 CC test/nvme/fdp/fdp.o 00:02:20.999 CC test/nvme/overhead/overhead.o 00:02:20.999 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:20.999 CC test/nvme/boot_partition/boot_partition.o 00:02:20.999 CC test/blobfs/mkfs/mkfs.o 00:02:20.999 CC test/nvme/aer/aer.o 00:02:20.999 LINK idxd_perf 00:02:20.999 CC test/accel/dif/dif.o 00:02:21.258 LINK vhost 00:02:21.258 CC test/lvol/esnap/esnap.o 00:02:21.258 LINK memory_ut 00:02:21.258 LINK fused_ordering 00:02:21.258 LINK connect_stress 00:02:21.258 LINK startup 00:02:21.258 LINK doorbell_aers 00:02:21.258 LINK err_injection 00:02:21.258 LINK boot_partition 00:02:21.258 LINK reserve 00:02:21.258 LINK simple_copy 00:02:21.258 LINK mkfs 00:02:21.258 LINK sgl 00:02:21.258 LINK nvme_dp 00:02:21.258 LINK reset 00:02:21.258 LINK aer 00:02:21.258 LINK overhead 00:02:21.258 LINK nvme_compliance 00:02:21.258 LINK fdp 00:02:21.258 CC examples/nvme/arbitration/arbitration.o 00:02:21.258 CC examples/nvme/abort/abort.o 00:02:21.258 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:21.258 CC examples/nvme/hello_world/hello_world.o 00:02:21.258 CC examples/nvme/hotplug/hotplug.o 00:02:21.258 CC examples/nvme/reconnect/reconnect.o 00:02:21.258 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:21.517 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:21.517 LINK dif 00:02:21.517 CC examples/accel/perf/accel_perf.o 00:02:21.517 CC examples/blob/cli/blobcli.o 
00:02:21.517 CC examples/blob/hello_world/hello_blob.o 00:02:21.517 LINK cmb_copy 00:02:21.517 LINK pmr_persistence 00:02:21.517 LINK hello_world 00:02:21.517 LINK hotplug 00:02:21.775 LINK iscsi_fuzz 00:02:21.775 LINK arbitration 00:02:21.775 LINK reconnect 00:02:21.775 LINK abort 00:02:21.775 LINK nvme_manage 00:02:21.775 LINK hello_blob 00:02:21.775 LINK accel_perf 00:02:22.034 LINK blobcli 00:02:22.034 CC test/bdev/bdevio/bdevio.o 00:02:22.034 LINK cuse 00:02:22.293 LINK bdevio 00:02:22.293 CC examples/bdev/bdevperf/bdevperf.o 00:02:22.293 CC examples/bdev/hello_world/hello_bdev.o 00:02:22.551 LINK hello_bdev 00:02:22.809 LINK bdevperf 00:02:23.374 CC examples/nvmf/nvmf/nvmf.o 00:02:23.633 LINK nvmf 00:02:24.567 LINK esnap 00:02:24.825 00:02:24.825 real 0m44.194s 00:02:24.825 user 6m29.990s 00:02:24.825 sys 3m25.488s 00:02:24.825 13:43:52 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:24.825 13:43:52 make -- common/autotest_common.sh@10 -- $ set +x 00:02:24.825 ************************************ 00:02:24.825 END TEST make 00:02:24.825 ************************************ 00:02:24.825 13:43:52 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:24.825 13:43:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:24.825 13:43:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:24.825 13:43:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.825 13:43:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:24.825 13:43:52 -- pm/common@44 -- $ pid=2674359 00:02:24.825 13:43:52 -- pm/common@50 -- $ kill -TERM 2674359 00:02:24.825 13:43:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.825 13:43:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:24.825 13:43:52 -- pm/common@44 -- $ pid=2674360 00:02:24.825 13:43:52 -- pm/common@50 -- $ kill -TERM 2674360 00:02:24.825 13:43:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.825 13:43:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:24.825 13:43:52 -- pm/common@44 -- $ pid=2674362 00:02:24.825 13:43:52 -- pm/common@50 -- $ kill -TERM 2674362 00:02:24.825 13:43:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.825 13:43:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:24.825 13:43:52 -- pm/common@44 -- $ pid=2674391 00:02:24.825 13:43:52 -- pm/common@50 -- $ sudo -E kill -TERM 2674391 00:02:25.084 13:43:52 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:25.084 13:43:52 -- nvmf/common.sh@7 -- # uname -s 00:02:25.084 13:43:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:25.084 13:43:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:25.084 13:43:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:25.084 13:43:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:25.084 13:43:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:25.084 13:43:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:25.084 13:43:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:25.084 13:43:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:25.084 13:43:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:25.084 13:43:52 -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:25.084 13:43:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:25.084 13:43:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:25.084 13:43:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:25.084 13:43:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:25.084 13:43:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:25.084 13:43:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:25.084 13:43:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:25.084 13:43:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:25.084 13:43:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:25.084 13:43:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:25.084 13:43:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.084 13:43:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.084 13:43:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.084 13:43:52 -- paths/export.sh@5 -- # export PATH 00:02:25.084 13:43:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.084 13:43:52 -- nvmf/common.sh@47 -- # : 0 00:02:25.084 13:43:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:25.084 13:43:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:25.084 13:43:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:25.084 13:43:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:25.084 13:43:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:25.084 13:43:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:25.084 13:43:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:25.084 13:43:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:25.084 13:43:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:25.084 13:43:52 -- spdk/autotest.sh@32 -- # uname -s 00:02:25.084 13:43:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:25.084 13:43:52 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:25.084 13:43:52 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:25.084 13:43:52 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:25.084 13:43:52 -- 
spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:25.084 13:43:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:25.084 13:43:52 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:25.084 13:43:52 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:25.084 13:43:52 -- spdk/autotest.sh@48 -- # udevadm_pid=2733370 00:02:25.084 13:43:52 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:25.084 13:43:52 -- pm/common@17 -- # local monitor 00:02:25.084 13:43:52 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:25.084 13:43:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.084 13:43:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.084 13:43:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.084 13:43:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.084 13:43:52 -- pm/common@21 -- # date +%s 00:02:25.084 13:43:52 -- pm/common@21 -- # date +%s 00:02:25.084 13:43:52 -- pm/common@25 -- # sleep 1 00:02:25.084 13:43:52 -- pm/common@21 -- # date +%s 00:02:25.084 13:43:52 -- pm/common@21 -- # date +%s 00:02:25.084 13:43:52 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721994232 00:02:25.084 13:43:52 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721994232 00:02:25.084 13:43:52 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721994232 00:02:25.084 13:43:52 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721994232 00:02:25.084 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721994232_collect-vmstat.pm.log 00:02:25.084 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721994232_collect-cpu-load.pm.log 00:02:25.084 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721994232_collect-cpu-temp.pm.log 00:02:25.084 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721994232_collect-bmc-pm.bmc.pm.log 00:02:26.023 13:43:53 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:26.023 13:43:53 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:26.023 13:43:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:26.023 13:43:53 -- common/autotest_common.sh@10 -- # set +x 00:02:26.023 13:43:53 -- spdk/autotest.sh@59 -- # create_test_list 00:02:26.023 13:43:53 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:26.023 13:43:53 -- common/autotest_common.sh@10 -- # set +x 00:02:26.023 13:43:53 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:26.023 13:43:53 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:26.023 13:43:53 -- spdk/autotest.sh@61 -- # 
src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:26.023 13:43:53 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:26.023 13:43:53 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:26.023 13:43:53 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:26.023 13:43:53 -- common/autotest_common.sh@1455 -- # uname 00:02:26.023 13:43:53 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:26.023 13:43:53 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:26.023 13:43:53 -- common/autotest_common.sh@1475 -- # uname 00:02:26.023 13:43:53 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:26.023 13:43:53 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:26.023 13:43:53 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:26.023 13:43:53 -- spdk/autotest.sh@72 -- # hash lcov 00:02:26.023 13:43:53 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:26.023 13:43:53 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:26.023 --rc lcov_branch_coverage=1 00:02:26.023 --rc lcov_function_coverage=1 00:02:26.023 --rc genhtml_branch_coverage=1 00:02:26.023 --rc genhtml_function_coverage=1 00:02:26.023 --rc genhtml_legend=1 00:02:26.023 --rc geninfo_all_blocks=1 00:02:26.023 ' 00:02:26.023 13:43:53 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:26.023 --rc lcov_branch_coverage=1 00:02:26.023 --rc lcov_function_coverage=1 00:02:26.023 --rc genhtml_branch_coverage=1 00:02:26.023 --rc genhtml_function_coverage=1 00:02:26.023 --rc genhtml_legend=1 00:02:26.023 --rc geninfo_all_blocks=1 00:02:26.023 ' 00:02:26.023 13:43:53 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:26.023 --rc lcov_branch_coverage=1 00:02:26.023 --rc lcov_function_coverage=1 00:02:26.023 --rc genhtml_branch_coverage=1 00:02:26.023 --rc genhtml_function_coverage=1 00:02:26.023 --rc genhtml_legend=1 00:02:26.023 --rc geninfo_all_blocks=1 00:02:26.023 --no-external' 00:02:26.023 13:43:53 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:26.023 --rc lcov_branch_coverage=1 00:02:26.023 --rc lcov_function_coverage=1 00:02:26.023 --rc genhtml_branch_coverage=1 00:02:26.023 --rc genhtml_function_coverage=1 00:02:26.023 --rc genhtml_legend=1 00:02:26.023 --rc geninfo_all_blocks=1 00:02:26.023 --no-external' 00:02:26.023 13:43:53 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:26.282 lcov: LCOV version 1.14 00:02:26.282 13:43:53 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:38.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:38.489 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:48.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:48.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:48.534 
[... the same pair of messages, "<header>.gcno:no functions found" followed by "geninfo: WARNING: GCOV did not produce any data for <header>.gcno", repeats here for every remaining header stub under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers, from accel.gcno through version.gcno ...] 00:02:48.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:48.536 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:48.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:48.536 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:48.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:48.536 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:48.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:48.536 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:48.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:48.536 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:48.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:48.536 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:51.076 13:44:18 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:51.076 13:44:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:51.076 13:44:18 -- common/autotest_common.sh@10 -- # set +x 00:02:51.076 13:44:18 -- spdk/autotest.sh@91 -- # rm -f 00:02:51.076 13:44:18 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:53.620 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:53.620 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:53.620 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:53.620 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:53.620 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:53.620 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:53.620 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:53.620 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:53.620 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:53.620 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:53.620 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:53.881 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:53.881 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:53.881 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:53.881 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:53.881 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:53.881 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:53.881 13:44:21 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:53.881 13:44:21 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:53.881 13:44:21 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:53.881 13:44:21 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:53.881 13:44:21 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:53.881 13:44:21 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:53.881 13:44:21 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:53.881 13:44:21 -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:53.881 13:44:21 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:53.881 13:44:21 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:53.881 13:44:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:53.881 13:44:21 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:53.881 13:44:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:53.881 13:44:21 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:53.881 13:44:21 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:53.881 No valid GPT data, bailing 00:02:53.881 13:44:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:53.881 13:44:21 -- scripts/common.sh@391 -- # pt= 00:02:53.881 13:44:21 -- scripts/common.sh@392 -- # return 1 00:02:53.881 13:44:21 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:53.881 1+0 records in 00:02:53.881 1+0 records out 00:02:53.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00476365 s, 220 MB/s 00:02:53.881 13:44:21 -- spdk/autotest.sh@118 -- # sync 00:02:53.881 13:44:21 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:53.881 13:44:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:53.881 13:44:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:59.166 13:44:25 -- spdk/autotest.sh@124 -- # uname -s 00:02:59.166 13:44:26 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:59.166 13:44:26 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:59.166 13:44:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:59.166 13:44:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:59.166 13:44:26 -- common/autotest_common.sh@10 -- # set +x 00:02:59.166 ************************************ 00:02:59.166 START TEST setup.sh 00:02:59.166 ************************************ 00:02:59.166 13:44:26 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:59.166 * Looking for test storage... 00:02:59.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:59.166 13:44:26 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:59.166 13:44:26 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:59.166 13:44:26 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:59.166 13:44:26 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:59.166 13:44:26 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:59.166 13:44:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:59.166 ************************************ 00:02:59.166 START TEST acl 00:02:59.166 ************************************ 00:02:59.166 13:44:26 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:59.166 * Looking for test storage... 
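The acl run that starts here reuses the zoned-device check traced just above: for each /sys/block/nvme* entry, common/autotest_common.sh reads the queue/zoned attribute and treats any value other than "none" as a zoned device to be excluded. A rough stand-alone sketch of that sysfs check follows; it is illustrative only, not code taken from the SPDK scripts, and the nvme*n* glob is an assumption.

# List block devices whose sysfs "zoned" attribute is not "none".
# Non-zoned devices report "none"; zoned ones report e.g. "host-managed".
for dev in /sys/block/nvme*n*; do
  [[ -e "$dev/queue/zoned" ]] || continue   # attribute may be missing on older kernels
  zoned="$(cat "$dev/queue/zoned")"
  if [[ "$zoned" != none ]]; then
    echo "$(basename "$dev") is zoned ($zoned)"
  fi
done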
00:02:59.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:59.166 13:44:26 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:59.166 13:44:26 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:59.166 13:44:26 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:59.166 13:44:26 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:59.166 13:44:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:59.166 13:44:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:59.166 13:44:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:59.166 13:44:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:59.166 13:44:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:59.166 13:44:26 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:59.166 13:44:26 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:59.166 13:44:26 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:59.166 13:44:26 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:59.166 13:44:26 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:59.166 13:44:26 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:59.166 13:44:26 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:02.466 13:44:29 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:02.466 13:44:29 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:02.466 13:44:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.466 13:44:29 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:02.466 13:44:29 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:02.466 13:44:29 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:05.010 Hugepages 00:03:05.010 node hugesize free / total 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.010 00:03:05.010 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:05.010 13:44:32 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:05.010 13:44:32 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:05.010 13:44:32 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:05.010 13:44:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:05.010 ************************************ 00:03:05.010 START TEST denied 00:03:05.010 ************************************ 00:03:05.010 13:44:32 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:05.010 13:44:32 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:05.010 13:44:32 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:05.010 13:44:32 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:05.010 13:44:32 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.010 13:44:32 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:07.547 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:07.547 13:44:34 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:07.547 13:44:34 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:07.547 13:44:34 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:07.547 13:44:34 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:07.547 13:44:34 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:07.547 13:44:34 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:07.547 13:44:34 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:07.547 13:44:34 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:07.547 13:44:34 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:07.547 13:44:34 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.750 00:03:11.750 real 0m5.992s 00:03:11.750 user 0m1.827s 00:03:11.750 sys 0m3.415s 00:03:11.750 13:44:38 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:11.750 13:44:38 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:11.750 ************************************ 00:03:11.750 END TEST denied 00:03:11.750 ************************************ 00:03:11.750 13:44:38 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:11.750 13:44:38 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:11.750 13:44:38 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:11.750 13:44:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:11.750 ************************************ 00:03:11.750 START TEST allowed 00:03:11.750 ************************************ 00:03:11.750 13:44:38 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:11.750 13:44:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:11.750 13:44:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:11.750 13:44:38 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:11.750 13:44:38 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.750 13:44:38 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:15.049 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:15.049 13:44:41 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:15.049 13:44:41 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:15.049 13:44:41 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:15.049 13:44:41 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:15.049 13:44:41 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.600 00:03:17.600 real 0m6.530s 00:03:17.600 user 0m2.038s 00:03:17.600 sys 0m3.684s 00:03:17.600 13:44:44 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:17.600 13:44:44 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:17.600 ************************************ 00:03:17.600 END TEST allowed 00:03:17.600 ************************************ 00:03:17.600 00:03:17.600 real 0m18.777s 00:03:17.600 user 0m6.252s 00:03:17.600 sys 0m11.187s 00:03:17.600 13:44:44 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:17.600 13:44:44 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:17.600 ************************************ 00:03:17.600 END TEST acl 00:03:17.600 ************************************ 00:03:17.600 13:44:44 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:17.600 13:44:44 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:17.600 13:44:44 setup.sh -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:03:17.600 13:44:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:17.600 ************************************ 00:03:17.600 START TEST hugepages 00:03:17.600 ************************************ 00:03:17.600 13:44:44 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:17.912 * Looking for test storage... 00:03:17.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:17.912 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:17.912 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:17.912 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:17.912 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:17.912 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:17.912 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:17.912 13:44:45 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:17.912 13:44:45 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:17.912 13:44:45 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:17.912 13:44:45 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:17.912 13:44:45 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.912 13:44:45 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.912 13:44:45 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.912 13:44:45 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.912 13:44:45 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.912 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.912 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.912 13:44:45 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 168299860 kB' 'MemAvailable: 171536548 kB' 'Buffers: 3896 kB' 'Cached: 14729508 kB' 'SwapCached: 0 kB' 'Active: 11595992 kB' 'Inactive: 3694312 kB' 'Active(anon): 11178036 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560244 kB' 'Mapped: 214908 kB' 'Shmem: 10621136 kB' 'KReclaimable: 537268 kB' 'Slab: 1197232 kB' 'SReclaimable: 537268 kB' 'SUnreclaim: 659964 kB' 'KernelStack: 20640 kB' 'PageTables: 9324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982020 kB' 'Committed_AS: 12722540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317096 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:17.912 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:03:17.912 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue [... the same IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue sequence is then traced for each subsequent /proc/meminfo field, from MemFree through AnonHugePages ...]
00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:17.913 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:17.914 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:17.914 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:17.914 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:17.914 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:17.914 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:17.914 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:17.914 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:17.914 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:17.914 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:17.914 13:44:45 
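The trace above boils down to two steps: read the kernel's default hugepage size out of /proc/meminfo, then zero every per-node hugepage pool so the test starts from a clean state. A minimal standalone sketch of that sequence (illustrative shell only; it uses awk for brevity rather than the script's own read loop, and is not the exact SPDK helper):

  # Read the default hugepage size (the trace above echoes 2048 for this host).
  default_hugepages_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)
  # Zero nr_hugepages for every page size on every NUMA node, which is what clear_hp amounts to.
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"
      done
  done
  echo "default hugepage size: ${default_hugepages_kb} kB"

Writing to these sysfs files requires root.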
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:17.914 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:17.914 13:44:45 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:17.914 13:44:45 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:17.914 13:44:45 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:17.914 13:44:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:17.914 ************************************ 00:03:17.914 START TEST default_setup 00:03:17.914 ************************************ 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.914 13:44:45 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:20.456 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:20.456 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:20.456 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:20.456 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:20.456 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:20.456 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:20.456 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 
00:03:20.456 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:20.456 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:20.456 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:20.456 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:20.456 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:20.456 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:20.456 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:20.456 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:20.456 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:21.399 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170450452 kB' 'MemAvailable: 173687140 kB' 'Buffers: 3896 kB' 'Cached: 14729608 kB' 'SwapCached: 0 kB' 'Active: 11615404 kB' 'Inactive: 3694312 kB' 'Active(anon): 11197448 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579116 kB' 'Mapped: 215028 kB' 'Shmem: 10621236 kB' 'KReclaimable: 537268 kB' 'Slab: 1196176 kB' 'SReclaimable: 537268 kB' 'SUnreclaim: 658908 kB' 'KernelStack: 20752 kB' 'PageTables: 9492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12750804 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 317304 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.399 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
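The snapshot just dumped reports HugePages_Total: 1024, HugePages_Free: 1024 and Hugepagesize: 2048 kB, which lines up with the 2097152 kB that default_setup requested for node 0. A quick back-of-the-envelope check (variable names here are illustrative, not from the scripts):

  size_kb=2097152        # memory requested by default_setup for node 0
  hugepagesize_kb=2048   # Hugepagesize reported in the snapshot above
  echo $(( size_kb / hugepagesize_kb ))   # prints 1024, the expected nr_hugepages

With nothing consuming the pool yet, the reserved and surplus counters are expected to stay at 0, which is what the verify pass below reads back.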
[xtrace condensed: the get_meminfo AnonHugePages pass continues through the remaining /proc/meminfo fields (Active(anon) through HardwareCorrupted), hitting continue for each one until it reaches the AnonHugePages line]
00:03:21.400 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:21.400 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:21.400 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:21.400 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:03:21.400 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:21.400 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.400 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:21.400 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
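The field-by-field walk traced above, and repeated next for HugePages_Surp and HugePages_Rsvd, is just a keyed lookup over /proc/meminfo. A self-contained sketch of that lookup (get_field is an illustrative name, not the SPDK helper):

  get_field() {
      local get="$1" var val _
      # Same parsing the trace shows: split each line on ': ' and compare the key.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }
  get_field AnonHugePages   # the pass above returned 0 for this field
  get_field HugePages_Surp  # the next pass reads this one; the snapshots show 0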
00:03:21.400 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:21.400 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.400 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.400 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.400 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.400 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.400 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.400 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.401 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170448520 kB' 'MemAvailable: 173685208 kB' 'Buffers: 3896 kB' 'Cached: 14729612 kB' 'SwapCached: 0 kB' 'Active: 11614952 kB' 'Inactive: 3694312 kB' 'Active(anon): 11196996 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578536 kB' 'Mapped: 214976 kB' 'Shmem: 10621240 kB' 'KReclaimable: 537268 kB' 'Slab: 1196172 kB' 'SReclaimable: 537268 kB' 'SUnreclaim: 658904 kB' 'KernelStack: 20832 kB' 'PageTables: 9544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12750824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317352 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:21.401 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.401 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.401 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.401 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.401 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.401 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.401 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.401 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.401 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.401 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.401 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.401 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.401 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[xtrace condensed: the get_meminfo HugePages_Surp pass skips each remaining /proc/meminfo field (Buffers, Cached, ..., HugePages_Total, HugePages_Free, HugePages_Rsvd) until it reaches the HugePages_Surp line]
00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- #
IFS=': ' 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170445880 kB' 'MemAvailable: 173682568 kB' 'Buffers: 3896 kB' 'Cached: 14729612 kB' 'SwapCached: 0 kB' 'Active: 11614540 kB' 'Inactive: 3694312 kB' 'Active(anon): 11196584 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578604 kB' 'Mapped: 214896 kB' 'Shmem: 10621240 kB' 'KReclaimable: 537268 kB' 'Slab: 1196180 kB' 'SReclaimable: 537268 kB' 'SUnreclaim: 658912 kB' 'KernelStack: 20832 kB' 'PageTables: 9656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12750844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317384 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.667 
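The long quoted block above is the output of printf '%s\n' "${mem[@]}": get_meminfo has slurped /proc/meminfo into the mem array and is now replaying it one line at a time, splitting each entry with IFS=': ' and skipping every field that is not the one it was asked for. A minimal sketch of that loop, reconstructed from the commands visible in this trace rather than copied from setup/common.sh, looks roughly like this:

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

    # Sketch reconstructed from the trace: print one field of /proc/meminfo,
    # or of a node's meminfo when a node id is passed as the second argument.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f=/proc/meminfo mem

        # Per-node statistics live in sysfs; with an empty $node this test
        # fails and the system-wide /proc/meminfo is used instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Node meminfo lines carry a "Node N " prefix; strip it off.
        mem=("${mem[@]#Node +([0-9]) }")

        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long runs of 'continue' in the log
            echo "$val"                        # e.g. 0 for HugePages_Surp
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

    get_meminfo HugePages_Surp     # -> 0 in this run
    get_meminfo HugePages_Rsvd     # -> 0
    get_meminfo HugePages_Total    # -> 1024
    get_meminfo HugePages_Surp 0   # -> node0 value, read from sysfs

The four calls at the bottom correspond to the four scans that make up this part of the log.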
13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.667 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.667
(setup/common.sh@31-32: the read loop walks the remaining /proc/meminfo fields, Active through Unaccepted, and hits 'continue' on every one of them because none matches HugePages_Rsvd)
13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.668 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31
-- # read -r var val _ 00:03:21.668 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.668 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.668 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.668 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.668 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.668 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.668 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.668 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:21.669 nr_hugepages=1024 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:21.669 resv_hugepages=0 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:21.669 surplus_hugepages=0 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:21.669 anon_hugepages=0 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170441348 kB' 'MemAvailable: 173678036 kB' 'Buffers: 3896 kB' 'Cached: 14729652 kB' 'SwapCached: 0 kB' 'Active: 11614496 kB' 'Inactive: 3694312 kB' 'Active(anon): 11196540 kB' 'Inactive(anon): 0 kB' 'Active(file): 
417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578516 kB' 'Mapped: 214896 kB' 'Shmem: 10621280 kB' 'KReclaimable: 537268 kB' 'Slab: 1196180 kB' 'SReclaimable: 537268 kB' 'SUnreclaim: 658912 kB' 'KernelStack: 20896 kB' 'PageTables: 9824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12750868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317384 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.669 13:44:48 
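At this point setup/hugepages.sh already has surp=0 (HugePages_Surp) and resv=0 (HugePages_Rsvd), has echoed the requested nr_hugepages=1024, and the snapshot above is being scanned for HugePages_Total to confirm the pool really holds 1024 pages. With Hugepagesize at 2048 kB that is 1024 x 2048 kB = 2097152 kB, which is exactly the Hugetlb figure in the snapshot. A rough sketch of the consistency check, simplified from what the trace shows rather than lifted from hugepages.sh:

    # Sketch of the hugepage accounting check driven by the values in this
    # log; get_meminfo is the helper sketched earlier in this section.
    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)     # 0 here
    resv=$(get_meminfo HugePages_Rsvd)     # 0 here
    total=$(get_meminfo HugePages_Total)   # 1024 here

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"

    # The pool is sized correctly when the kernel-reported total equals the
    # requested pages plus surplus plus reserved pages: 1024 == 1024 + 0 + 0.
    if ! (( total == nr_hugepages + surp + resv )); then
        echo "unexpected hugepage count: $total" >&2
    fi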
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.669 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.669
(setup/common.sh@31-32: the read loop walks the remaining /proc/meminfo fields, Inactive through Unaccepted, and hits 'continue' on every one of them because none matches HugePages_Total)
13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:21.671
13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92059628 kB' 'MemUsed: 5556000 kB' 'SwapCached: 0 kB' 'Active: 1867424 kB' 'Inactive: 219240 kB' 'Active(anon): 1705600 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 219240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1896260 kB' 'Mapped: 78548 kB' 'AnonPages: 193524 kB' 'Shmem: 1515196 kB' 'KernelStack: 10776 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 349228 kB' 'Slab: 654252 kB' 'SReclaimable: 349228 kB' 'SUnreclaim: 305024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:21.671 13:44:48 
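The trace then moves on to per-NUMA-node accounting: get_nodes found two nodes, with all 1024 hugepages placed on node0 and none on node1, and get_meminfo is now called with a node argument so that it reads /sys/devices/system/node/node0/meminfo (the snapshot just printed) instead of /proc/meminfo. A sketch of that per-node pass follows; the sysfs nr_hugepages path is an assumption made for illustration, since the trace only shows the resulting counts 1024 and 0:

    # Illustrative per-node pass; only the shape mirrors the trace, and the
    # per-node nr_hugepages path is an assumed location, not taken from it.
    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # 1024 pages on node0 and 0 on node1 in this run
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}   # 2 on this machine

    for node in "${!nodes_sys[@]}"; do
        # Fold in the surplus pages that nodeN/meminfo reports for each node.
        surp=$(get_meminfo HugePages_Surp "$node")
        (( nodes_sys[node] += surp ))
    done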
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.671 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.671
(setup/common.sh@31-32: node0's meminfo fields, MemFree through Unaccepted, each fail the HugePages_Surp match and hit 'continue')
13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.672 13:44:48
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.672 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.672 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.672 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.672 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.672 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.672 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.672 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.672 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:21.672 13:44:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:21.672 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.672 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.672 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.672 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.672 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:21.672 node0=1024 expecting 1024 00:03:21.672 13:44:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:21.672 00:03:21.672 real 0m3.763s 00:03:21.672 user 0m1.160s 00:03:21.672 sys 0m1.770s 00:03:21.672 13:44:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:21.672 13:44:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:21.672 ************************************ 00:03:21.672 END TEST default_setup 00:03:21.672 ************************************ 00:03:21.672 13:44:48 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:21.672 13:44:48 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:21.672 13:44:48 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:21.672 13:44:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:21.672 ************************************ 00:03:21.672 START TEST per_node_1G_alloc 00:03:21.672 ************************************ 00:03:21.672 13:44:48 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:21.672 13:44:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:21.672 13:44:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:21.672 13:44:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:21.672 13:44:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:21.672 13:44:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:21.672 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:21.673 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
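[editor note] The per_node_1G_alloc run that begins here requests a 1 GiB allocation (get_test_nr_hugepages 1048576 0 1) and, per the values logged just below, ends up with nr_hugepages=512 assigned to each of nodes 0 and 1. A minimal sketch of that arithmetic, assuming the 2048 kB default hugepage size reported in the meminfo dumps later in this log; variable names here are illustrative, not the exact setup/hugepages.sh source:

# Sketch only: convert a size request in kB into default-size hugepages and
# assign that count to each requested NUMA node, matching the traced values
# (nr_hugepages=512, nodes_test[0]=512, nodes_test[1]=512).
size_kb=1048576                                  # requested: 1 GiB expressed in kB
hugepagesize_kb=2048                             # Hugepagesize from /proc/meminfo
nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 1048576 / 2048 = 512
user_nodes=(0 1)
nodes_test=()
for node in "${user_nodes[@]}"; do
    nodes_test[node]=$nr_hugepages               # 512 pages requested on each node
done
echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${user_nodes[*]}")"

The trace below then exports exactly these values (NRHUGE=512, HUGENODE=0,1) before re-running scripts/setup.sh.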
00:03:21.673 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:21.673 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:21.673 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:21.673 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:21.673 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.673 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:21.673 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.673 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.673 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.673 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:21.673 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:21.673 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:21.673 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:21.673 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:21.673 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:21.673 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:21.673 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:21.673 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:21.673 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.673 13:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:24.220 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:24.220 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:24.220 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:24.220 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:24.220 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:24.220 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:24.220 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:24.220 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:24.220 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:24.220 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:24.220 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:24.220 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:24.220 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:24.220 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:24.220 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:24.220 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:24.220 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170451312 kB' 'MemAvailable: 173688000 kB' 'Buffers: 3896 kB' 'Cached: 14729744 kB' 'SwapCached: 0 kB' 'Active: 11615504 kB' 'Inactive: 3694312 kB' 'Active(anon): 11197548 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578916 kB' 'Mapped: 215060 kB' 'Shmem: 10621372 kB' 'KReclaimable: 537268 kB' 'Slab: 1196900 kB' 'SReclaimable: 537268 kB' 'SUnreclaim: 659632 kB' 'KernelStack: 20752 kB' 'PageTables: 9872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12761676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317352 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 
164626432 kB' 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.220 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
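[editor note] The long run of [[ ... ]] / continue lines above and below is setup/common.sh's get_meminfo walking the snapshot it just printf'd: /proc/meminfo is slurped with mapfile, any leading "Node <n> " prefix is stripped, and each "Key: value" pair is split on ': ' and skipped until the requested key (here AnonHugePages) matches. A compact sketch of that pattern, with an illustrative function name rather than the exact common.sh source:

shopt -s extglob   # the "Node +([0-9]) " strip below uses an extended glob
# Condensed stand-in for the traced lookup, not the SPDK script itself.
get_one_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # A per-node query reads the node-specific file when it exists.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # drop "Node 0 " prefixes, if any
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the repeated skips seen in this log
        echo "$val"                        # value in kB, or a bare page count
        return 0
    done
    return 1
}

Every non-matching key produces one [[ ... ]] test plus a continue, which is why each lookup contributes dozens of near-identical trace lines here.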
00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.221 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
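[editor note] With anon read back as 0, the script repeats the same walk for HugePages_Surp and then HugePages_Rsvd (both 0 in the snapshots printed in this log). The same counters can be spot-checked by hand on the test node; the commands below are ordinary procfs/sysfs reads, not SPDK helpers:

# Spot-check the counters verify_nr_hugepages is reading:
grep -E 'AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo
# Per-NUMA-node 2 MiB hugepage counts, matching the node0/node1 split requested above:
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages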
00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170454296 kB' 'MemAvailable: 173690984 kB' 'Buffers: 3896 kB' 'Cached: 14729748 kB' 'SwapCached: 0 kB' 'Active: 11614088 kB' 'Inactive: 3694312 kB' 'Active(anon): 11196132 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578508 kB' 'Mapped: 214984 kB' 'Shmem: 10621376 kB' 'KReclaimable: 537268 kB' 'Slab: 1196848 kB' 'SReclaimable: 537268 kB' 'SUnreclaim: 659580 kB' 'KernelStack: 20848 kB' 'PageTables: 9800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12750984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317336 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.487 13:44:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 13:44:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:24.489 13:44:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170453668 kB' 'MemAvailable: 173690356 kB' 'Buffers: 3896 kB' 'Cached: 14729764 kB' 'SwapCached: 0 kB' 'Active: 11614516 kB' 'Inactive: 3694312 kB' 'Active(anon): 11196560 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578884 kB' 'Mapped: 214908 kB' 'Shmem: 10621392 kB' 'KReclaimable: 537268 kB' 'Slab: 1196840 kB' 'SReclaimable: 537268 kB' 'SUnreclaim: 659572 kB' 'KernelStack: 20848 kB' 'PageTables: 9532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12751012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317304 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:24.491 nr_hugepages=1024 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.491 resv_hugepages=0 00:03:24.491 13:44:51 
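
[editor's note] The values echoed here and just below (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) are simply the HugePages_* counters pulled out of the /proc/meminfo dump above. Outside the harness the same numbers can be checked directly; the grep below uses only standard /proc/meminfo fields, and the values noted in the comment are the ones from this run's dump.

    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|AnonHugePages):' /proc/meminfo
    # expected on this box: AnonHugePages 0 kB, HugePages_Total 1024, HugePages_Free 1024,
    # HugePages_Rsvd 0, HugePages_Surp 0, Hugepagesize 2048 kB
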
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.491 surplus_hugepages=0 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.491 anon_hugepages=0 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170451444 kB' 'MemAvailable: 173688132 kB' 'Buffers: 3896 kB' 'Cached: 14729788 kB' 'SwapCached: 0 kB' 'Active: 11614628 kB' 'Inactive: 3694312 kB' 'Active(anon): 11196672 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578448 kB' 'Mapped: 214908 kB' 'Shmem: 10621416 kB' 'KReclaimable: 537268 kB' 'Slab: 1196840 kB' 'SReclaimable: 537268 kB' 'SUnreclaim: 659572 kB' 'KernelStack: 20720 kB' 'PageTables: 9388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12751168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317304 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:24.493 13:44:51 
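
[editor's note] At this point the global accounting check (( 1024 == nr_hugepages + surp + resv )) has passed and get_nodes starts enumerating the NUMA nodes, recording 512 expected pages per node (it finds two nodes just below: no_nodes=2). The per-node verification that follows re-reads /sys/devices/system/node/nodeN/meminfo; a minimal stand-alone equivalent, assuming only the standard sysfs layout, would be:

    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        # node meminfo lines look like "Node 0 HugePages_Total:   512" -> field 4 is the count
        total=$(grep 'HugePages_Total' "$node/meminfo" | awk '{print $4}')
        echo "node$n HugePages_Total=$total (expected 512)"
    done
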
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 93101556 kB' 'MemUsed: 4514072 kB' 'SwapCached: 0 kB' 'Active: 1867016 kB' 'Inactive: 219240 kB' 'Active(anon): 1705192 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 219240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1896336 kB' 'Mapped: 78548 kB' 'AnonPages: 193104 kB' 'Shmem: 1515272 kB' 'KernelStack: 10728 kB' 'PageTables: 3760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 349228 kB' 'Slab: 654376 kB' 'SReclaimable: 349228 kB' 'SUnreclaim: 305148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.494 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 77348260 kB' 'MemUsed: 16417248 kB' 'SwapCached: 0 kB' 'Active: 9749156 kB' 'Inactive: 3475072 kB' 'Active(anon): 9493024 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3475072 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12837384 kB' 'Mapped: 136864 kB' 'AnonPages: 386908 kB' 'Shmem: 9106180 kB' 'KernelStack: 10136 kB' 'PageTables: 6320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 188040 kB' 'Slab: 542464 kB' 'SReclaimable: 188040 kB' 'SUnreclaim: 354424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
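The trace above is setup/common.sh's get_meminfo helper: for node 1 it switches from /proc/meminfo to /sys/devices/system/node/node1/meminfo, strips the leading "Node 1 " prefix from every line, and then walks the fields one by one until it reaches HugePages_Surp, which is why the log shows long runs of "[[ ... ]] / continue / IFS / read" entries. A minimal sketch of that pattern, reconstructed from the xtrace rather than copied from the SPDK source (the name get_meminfo_sketch and the missing-field fallback are assumptions):

get_meminfo_sketch() {
	local get=$1 node=$2        # field name, optional NUMA node id
	local mem_f=/proc/meminfo
	local -a mem
	local line var val _

	# Per-node statistics live under sysfs; keep the global file otherwise.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Node files prefix every line with "Node <id> "; strip it (needs extglob).
	shopt -s extglob
	mem=("${mem[@]#Node +([0-9]) }")

	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue   # the per-field scan seen in the trace
		echo "$val"
		return 0
	done
	echo 0   # assumed fallback when the field is absent
}

# Usage matching the trace: get_meminfo_sketch HugePages_Surp 1   ->  0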
00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
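As a quick consistency check on the node 1 snapshot printed just above: MemUsed = MemTotal - MemFree = 93765508 kB - 77348260 kB = 16417248 kB, which is exactly the MemUsed value reported. Likewise, HugePages_Total/HugePages_Free of 512 together with the 2048 kB Hugepagesize reported in the global meminfo further down corresponds to 512 × 2048 kB = 1 GiB of hugepage memory staged on that node, consistent with the 512 pages per node that the per_node_1G_alloc test expects.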
00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.496 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.497 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.497 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.497 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.497 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.497 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.497 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.497 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.497 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.497 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.497 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.497 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.497 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.497 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:24.497 node0=512 expecting 512 00:03:24.497 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.497 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.497 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.497 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:24.497 node1=512 expecting 512 00:03:24.497 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:24.497 00:03:24.497 real 0m2.833s 00:03:24.497 user 0m1.184s 00:03:24.497 sys 0m1.718s 00:03:24.497 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:24.497 13:44:51 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:24.497 ************************************ 00:03:24.497 END TEST per_node_1G_alloc 00:03:24.497 ************************************ 00:03:24.497 13:44:51 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:24.497 13:44:51 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:24.497 13:44:51 
setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:24.497 13:44:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:24.497 ************************************ 00:03:24.497 START TEST even_2G_alloc 00:03:24.497 ************************************ 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.497 13:44:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:27.043 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:27.043 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:27.043 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
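Between the end of per_node_1G_alloc and the setup.sh run above, the even_2G_alloc prologue converts the requested size of 2097152 kB into nr_hugepages=1024 (consistent with dividing by the 2048 kB default hugepage size) and then assigns 512 pages to each of the two NUMA nodes. A compact sketch of that per-node split, following the hugepages.sh steps visible in the trace (the helper name split_hugepages_sketch and the explicit division are assumptions; the real script walks the node array in place):

# Spread a hugepage request evenly across NUMA nodes, as traced above.
split_hugepages_sketch() {
	local size_kb=$1                  # e.g. 2097152 (2 GiB)
	local no_nodes=$2                 # e.g. 2
	local default_hugepage_kb=2048    # Hugepagesize from /proc/meminfo
	local nr_hugepages=$((size_kb / default_hugepage_kb))   # -> 1024

	local -a nodes_test=()
	local node
	for ((node = 0; node < no_nodes; node++)); do
		nodes_test[node]=$((nr_hugepages / no_nodes))        # -> 512 per node
	done

	for node in "${!nodes_test[@]}"; do
		echo "node$node=${nodes_test[node]}"
	done
}

# split_hugepages_sketch 2097152 2 prints node0=512 and node1=512, matching the
# 'node0=512 expecting 512' / 'node1=512 expecting 512' verification above.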
00:03:27.043 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:27.043 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:27.043 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:27.043 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:27.043 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:27.043 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:27.043 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:27.043 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:27.043 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:27.043 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:27.043 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:27.043 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:27.043 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:27.043 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:27.043 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:27.043 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:27.043 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:27.043 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:27.043 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:27.043 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:27.043 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:27.043 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:27.043 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:27.043 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:27.043 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:27.043 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:27.043 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.043 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.043 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.043 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.043 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.043 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.043 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.043 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170475456 kB' 'MemAvailable: 173712144 kB' 'Buffers: 3896 kB' 'Cached: 14729904 kB' 'SwapCached: 0 kB' 'Active: 11611176 kB' 'Inactive: 3694312 kB' 'Active(anon): 11193220 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574316 kB' 'Mapped: 213964 kB' 'Shmem: 10621532 kB' 'KReclaimable: 537268 kB' 'Slab: 1195100 kB' 'SReclaimable: 537268 kB' 'SUnreclaim: 657832 kB' 'KernelStack: 20720 kB' 'PageTables: 8988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12726068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317288 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.044 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.045 13:44:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
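The even_2G_alloc verification traced here begins by testing the transparent hugepage setting ("always [madvise] never" on this host) against the literal pattern *[never]*; since THP is not disabled, it also reads AnonHugePages from the global /proc/meminfo, which comes back as 0 above. A hedged sketch of that step (thp_setting is an assumed variable name, and get_meminfo_sketch is the helper sketched earlier):

# Fold anonymous hugepages into the accounting unless THP is set to "never".
thp_setting=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
anon=0
if [[ $thp_setting != *"[never]"* ]]; then
	# No node argument: the helper falls back to the global /proc/meminfo.
	anon=$(get_meminfo_sketch AnonHugePages)
fi
echo "AnonHugePages accounted: $anon kB"   # 0 kB in the trace above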
00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170476812 kB' 'MemAvailable: 173713500 kB' 'Buffers: 3896 kB' 'Cached: 14729908 kB' 'SwapCached: 0 kB' 'Active: 11610780 kB' 'Inactive: 3694312 kB' 'Active(anon): 11192824 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574532 kB' 'Mapped: 213984 kB' 'Shmem: 10621536 kB' 'KReclaimable: 537268 kB' 'Slab: 1195260 kB' 'SReclaimable: 537268 kB' 'SUnreclaim: 657992 kB' 'KernelStack: 20656 kB' 'PageTables: 8788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12727576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317272 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.045 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.046 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170475472 kB' 'MemAvailable: 173712160 kB' 'Buffers: 3896 kB' 'Cached: 14729924 kB' 'SwapCached: 0 kB' 'Active: 11610456 kB' 'Inactive: 3694312 kB' 'Active(anon): 11192500 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574152 kB' 'Mapped: 213908 kB' 'Shmem: 10621552 kB' 'KReclaimable: 537268 kB' 'Slab: 1195364 kB' 'SReclaimable: 537268 kB' 'SUnreclaim: 658096 kB' 'KernelStack: 20640 kB' 'PageTables: 9140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12726104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317240 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.047 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 
13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.048 13:44:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.048 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.049 13:44:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:27.049 nr_hugepages=1024 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:27.049 resv_hugepages=0 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:27.049 surplus_hugepages=0 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:27.049 anon_hugepages=0 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
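Note for readers decoding the trace: the long repeated blocks above are a get_meminfo-style lookup walking every /proc/meminfo key until it hits the requested one (HugePages_Surp, then HugePages_Rsvd), echoing its value and returning. The following is only a minimal bash sketch reconstructed from the traced commands, not the authoritative setup/common.sh source; the helper name, argument handling, and fallback value are assumptions mirroring what the trace shows.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node N " prefix-strip pattern below

    # Minimal reconstruction of the get_meminfo logic visible in the trace above.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local var val _
        # A per-node query would read the node-specific meminfo file instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node N " prefix; strip it (extglob pattern).
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every key except the requested one
            echo "${val:-0}"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # Usage matching the trace: both lookups report 0 on this machine.
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    echo "surplus_hugepages=$surp resv_hugepages=$resv"

With surp=0 and resv=0, the test's consistency check (( 1024 == nr_hugepages + surp + resv )) in the trace above holds for the 1024 preallocated 2048 kB hugepages, after which the same scan is repeated for HugePages_Total below.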
00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170475116 kB' 'MemAvailable: 173711300 kB' 'Buffers: 3896 kB' 'Cached: 14729948 kB' 'SwapCached: 0 kB' 'Active: 11611048 kB' 'Inactive: 3694312 kB' 'Active(anon): 11193092 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574744 kB' 'Mapped: 213908 kB' 'Shmem: 10621576 kB' 'KReclaimable: 537268 kB' 'Slab: 1195336 kB' 'SReclaimable: 537268 kB' 'SUnreclaim: 658068 kB' 'KernelStack: 20672 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12727620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317288 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:27.049 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.050 13:44:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.050 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 
13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.313 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.314 13:44:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.314 
13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 93106080 kB' 'MemUsed: 4509548 kB' 'SwapCached: 0 kB' 'Active: 1864608 kB' 'Inactive: 219240 kB' 'Active(anon): 1702784 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 219240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1896464 kB' 'Mapped: 78196 kB' 'AnonPages: 190516 kB' 'Shmem: 1515400 kB' 'KernelStack: 10696 kB' 'PageTables: 
3600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 349228 kB' 'Slab: 653164 kB' 'SReclaimable: 349228 kB' 'SUnreclaim: 303936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:27.314 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.315 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 77367348 kB' 'MemUsed: 16398160 kB' 'SwapCached: 0 kB' 'Active: 9746044 kB' 'Inactive: 3475072 kB' 'Active(anon): 9489912 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3475072 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12837400 kB' 'Mapped: 135712 kB' 'AnonPages: 383756 kB' 'Shmem: 9106196 kB' 'KernelStack: 10072 kB' 'PageTables: 5756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 188040 kB' 'Slab: 542168 kB' 'SReclaimable: 188040 kB' 'SUnreclaim: 354128 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.316 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.317 13:44:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:27.317 node0=512 expecting 512 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:27.317 node1=512 expecting 512 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:27.317 00:03:27.317 real 0m2.662s 00:03:27.317 user 0m1.047s 00:03:27.317 sys 0m1.646s 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:27.317 13:44:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:27.317 ************************************ 00:03:27.317 END TEST even_2G_alloc 00:03:27.317 ************************************ 00:03:27.317 13:44:54 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:27.317 13:44:54 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:27.317 13:44:54 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:27.317 13:44:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:27.317 ************************************ 00:03:27.317 START TEST odd_alloc 00:03:27.317 
************************************ 00:03:27.317 13:44:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.318 13:44:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:29.864 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:29.864 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:29.864 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:29.864 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:29.864 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:29.864 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:29.864 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:29.864 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:29.864 
0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:29.864 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:29.864 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:29.864 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:29.864 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:29.864 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:29.864 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:29.864 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:29.864 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:29.864 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:29.864 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:29.864 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:29.864 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:29.864 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:29.864 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:29.864 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:29.864 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:29.864 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:29.864 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170439900 kB' 'MemAvailable: 173676588 kB' 'Buffers: 3896 kB' 'Cached: 14730044 kB' 'SwapCached: 0 kB' 'Active: 11613780 kB' 'Inactive: 3694312 kB' 'Active(anon): 11195824 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577364 kB' 'Mapped: 214460 kB' 'Shmem: 10621672 kB' 'KReclaimable: 537268 kB' 'Slab: 1195624 kB' 'SReclaimable: 537268 kB' 'SUnreclaim: 658356 kB' 'KernelStack: 20688 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12731316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317320 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.865 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.866 13:44:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.866 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.866 
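The trace above is setup/common.sh's get_meminfo helper finishing its scan for AnonHugePages (the key matches, the value 0 is echoed and the function returns, so hugepages.sh records anon=0) and then starting the same scan for HugePages_Surp: with no node given it falls back to /proc/meminfo, snapshots it with mapfile, strips any 'Node N ' prefix, and walks the lines with IFS=': '. A minimal stand-alone sketch of that lookup, assuming a hypothetical helper name meminfo_value and plain /proc/meminfo input (the real helper also supports per-node meminfo files):

  meminfo_value() {
      # Print the value for one /proc/meminfo key, e.g. AnonHugePages.
      local get="$1" var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }

  meminfo_value AnonHugePages   # prints 0 on this host, matching the trace above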
13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170444780 kB' 'MemAvailable: 173681468 kB' 'Buffers: 3896 kB' 'Cached: 14730048 kB' 'SwapCached: 0 kB' 'Active: 11616300 kB' 'Inactive: 3694312 kB' 'Active(anon): 11198344 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579900 kB' 'Mapped: 214764 kB' 'Shmem: 10621676 kB' 'KReclaimable: 537268 kB' 'Slab: 1195664 kB' 'SReclaimable: 537268 kB' 'SUnreclaim: 658396 kB' 'KernelStack: 20784 kB' 'PageTables: 9568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12732392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317260 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.867 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.868 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170444368 kB' 'MemAvailable: 173681056 kB' 'Buffers: 3896 kB' 'Cached: 14730064 kB' 'SwapCached: 0 kB' 'Active: 11610896 kB' 'Inactive: 3694312 kB' 'Active(anon): 11192940 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574536 kB' 'Mapped: 214260 kB' 'Shmem: 10621692 kB' 'KReclaimable: 537268 kB' 'Slab: 1195648 kB' 'SReclaimable: 537268 kB' 'SUnreclaim: 658380 kB' 'KernelStack: 20704 kB' 'PageTables: 9480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12727916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317272 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- 
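The same field-by-field scan is repeated once per counter (HugePages_Surp above, HugePages_Rsvd here), so each get_meminfo call walks the whole snapshot again. Purely as an illustration, not the way the test suite does it, the hugepage counters this check cares about could be pulled in a single pass with awk; the commented output reflects the values shown in the snapshots above:

  awk '/^HugePages_(Total|Free|Rsvd|Surp):/ {print $1, $2}' /proc/meminfo
  # HugePages_Total: 1025
  # HugePages_Free: 1025
  # HugePages_Rsvd: 0
  # HugePages_Surp: 0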
setup/common.sh@32 -- # continue 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.869 13:44:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.869 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.870 
13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.870 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:29.871 nr_hugepages=1025 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:29.871 resv_hugepages=0 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:29.871 surplus_hugepages=0 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:29.871 anon_hugepages=0 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- 
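At this point the odd_alloc test has anon=0, surp=0 and resv=0 from the three get_meminfo calls, echoes nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and then checks (at hugepages.sh lines 107 and 109 in the trace) that the odd page count it requested is fully accounted for. A small sketch of that accounting with the values from this run; the variable names mirror the trace but the snippet is only illustrative:

  nr_hugepages=1025 surp=0 resv=0 anon=0
  if (( 1025 == nr_hugepages + surp + resv )) && (( 1025 == nr_hugepages )); then
      echo "odd hugepage allocation fully accounted for"   # the branch taken in this run
  fi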
setup/common.sh@20 -- # local mem_f mem 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170448544 kB' 'MemAvailable: 173685232 kB' 'Buffers: 3896 kB' 'Cached: 14730068 kB' 'SwapCached: 0 kB' 'Active: 11611156 kB' 'Inactive: 3694312 kB' 'Active(anon): 11193200 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574836 kB' 'Mapped: 213920 kB' 'Shmem: 10621696 kB' 'KReclaimable: 537268 kB' 'Slab: 1195648 kB' 'SReclaimable: 537268 kB' 'SUnreclaim: 658380 kB' 'KernelStack: 20608 kB' 'PageTables: 8968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12727940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317304 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:29.871 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.135 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.135 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.135 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.135 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.135 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.135 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.135 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.135 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.135 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.135 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.135 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.135 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.135 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.135 13:44:57 
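For reference, the snapshot just printed is internally consistent on the hugepage side: 1025 pages of Hugepagesize 2048 kB account exactly for the reported Hugetlb figure.

  echo $(( 1025 * 2048 ))   # 2099200, matching 'Hugetlb: 2099200 kB'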
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.135 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.135 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.135 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.135 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.135 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.136 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.137 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 93091396 kB' 'MemUsed: 4524232 kB' 'SwapCached: 0 kB' 'Active: 1864380 kB' 'Inactive: 219240 kB' 'Active(anon): 1702556 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 219240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1896584 kB' 'Mapped: 78196 kB' 'AnonPages: 190176 kB' 'Shmem: 1515520 kB' 'KernelStack: 10696 kB' 'PageTables: 3600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 349228 kB' 'Slab: 653396 kB' 'SReclaimable: 349228 kB' 'SUnreclaim: 304168 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.138 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 77353772 kB' 'MemUsed: 16411736 kB' 'SwapCached: 0 kB' 'Active: 9746940 kB' 'Inactive: 3475072 kB' 'Active(anon): 9490808 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3475072 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12837432 kB' 'Mapped: 135724 kB' 'AnonPages: 384700 kB' 'Shmem: 9106228 kB' 'KernelStack: 10072 kB' 'PageTables: 5556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 188040 kB' 'Slab: 542316 kB' 'SReclaimable: 188040 kB' 'SUnreclaim: 354276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
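
The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "continue" entries above and below are the xtrace of setup/common.sh's get_meminfo walking one meminfo key per iteration until it hits the requested one. A simplified, self-contained sketch of that logic as the trace shows it (the real helper loops with read/continue; names here follow the trace, written as a plain for-loop with extglob enabled explicitly):

# Simplified sketch of the get_meminfo logic stepped through in the xtrace.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val _
    # with a node index, prefer the per-node sysfs meminfo when it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    shopt -s extglob
    local mem
    mapfile -t mem < "$mem_f"
    # per-node files prefix every line with "Node <n> "; strip that prefix
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
# e.g. get_meminfo_sketch HugePages_Total    -> 1025 (system-wide, as echoed above)
#      get_meminfo_sketch HugePages_Surp 0   -> 0    (node 0, the 'echo 0' below)
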
00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.139 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.140 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:30.141 node0=512 expecting 513 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:30.141 node1=513 expecting 512 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:30.141 00:03:30.141 real 0m2.781s 00:03:30.141 user 0m1.172s 00:03:30.141 sys 0m1.673s 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:30.141 13:44:57 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:30.141 ************************************ 00:03:30.141 END TEST odd_alloc 00:03:30.141 ************************************ 00:03:30.141 13:44:57 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:30.141 13:44:57 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:30.141 13:44:57 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:30.141 13:44:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:30.141 ************************************ 00:03:30.141 START TEST custom_alloc 00:03:30.141 ************************************ 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:30.141 13:44:57 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.141 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in 
"${!nodes_hp[@]}" 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.142 13:44:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:32.686 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:32.686 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:32.686 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:32.686 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:32.686 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:32.686 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:32.686 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:32.686 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:32.686 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:32.686 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:32.686 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:32.686 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:32.686 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 
00:03:32.686 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:32.686 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:32.686 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:32.686 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169410512 kB' 'MemAvailable: 172647200 kB' 'Buffers: 3896 kB' 'Cached: 14730204 kB' 'SwapCached: 0 kB' 'Active: 11615056 kB' 'Inactive: 3694312 kB' 'Active(anon): 11197100 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578404 kB' 'Mapped: 214492 kB' 'Shmem: 10621832 kB' 'KReclaimable: 537268 kB' 'Slab: 1195608 kB' 'SReclaimable: 537268 kB' 'SUnreclaim: 658340 kB' 'KernelStack: 20864 kB' 'PageTables: 9644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12733560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317416 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:32.686 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.687 13:45:00 setup.sh.hugepages.custom_alloc -- 
[xtrace elided: setup/common.sh@31-@32 per-field checks from Active(anon) through Committed_AS; each one fails `[[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]` and hits `continue`, exactly like the surrounding checks (the backslashes are simply how `set -x` prints the right-hand pattern of the `==` test)]
]] 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
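The AnonHugePages lookup that just returned anon=0 above and the HugePages_Surp lookup that continues below both go through the same setup/common.sh get_meminfo helper: pick /proc/meminfo (or the per-NUMA-node meminfo when a node argument is given), strip the "Node N " prefix, then read "field: value" pairs until the requested field matches. The snippet below is a minimal sketch of that pattern reconstructed from the xtrace, not the verbatim SPDK source; the process-substitution plumbing and the quoting of "$get" are assumptions.

    shopt -s extglob    # needed for the "Node N " prefix strip below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f mem

        mem_f=/proc/meminfo
        # With no node argument (node= is empty, as in the trace) the path
        # /sys/devices/system/node/node/meminfo does not exist, so /proc/meminfo is used.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node meminfo prefixes every line with "Node N "

        # IFS=': ' splits "HugePages_Surp:        0" into var=HugePages_Surp, val=0
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")

        return 1
    }

    get_meminfo HugePages_Total    # prints 1536 on this box, matching the snapshots in this log

The verbose per-field `continue` trace in this log is just this read loop running with `set -x` enabled.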
00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169401676 kB' 'MemAvailable: 172638364 kB' 'Buffers: 3896 kB' 'Cached: 14730208 kB' 'SwapCached: 0 kB' 'Active: 11620480 kB' 'Inactive: 3694312 kB' 'Active(anon): 11202524 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583884 kB' 'Mapped: 214492 kB' 'Shmem: 10621836 kB' 'KReclaimable: 537268 kB' 'Slab: 1195596 kB' 'SReclaimable: 537268 kB' 'SUnreclaim: 658328 kB' 'KernelStack: 21024 kB' 'PageTables: 10324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12739440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317372 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.688 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.688 13:45:00 
[xtrace elided: setup/common.sh@31-@32 per-field checks from Cached through Unaccepted; the same read loop now scans for HugePages_Surp, and every field ahead of the HugePages_* block fails the comparison and hits `continue`]
00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169400196 kB' 'MemAvailable: 172636884 kB' 'Buffers: 3896 kB' 'Cached: 14730220 kB' 'SwapCached: 0 kB' 'Active: 11620072 kB' 'Inactive: 3694312 kB' 'Active(anon): 11202116 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583964 kB' 'Mapped: 214772 kB' 'Shmem: 10621848 kB' 'KReclaimable: 537268 kB' 'Slab: 1195608 kB' 'SReclaimable: 537268 kB' 'SUnreclaim: 658340 kB' 'KernelStack: 20944 kB' 'PageTables: 9972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 
kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12739460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317404 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.956 
13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.956 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.957 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
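The long run of [[ ... ]] tests followed by continue above is the trace of the get_meminfo helper in setup/common.sh walking /proc/meminfo one field at a time until it reaches the requested key (here HugePages_Rsvd), echoing its value and returning. Below is a minimal sketch of that lookup pattern, reconstructed only from what is visible in the trace; the helper name get_meminfo_sketch and the sed-based "Node N" prefix stripping are assumptions (the real helper uses mapfile with an extglob substitution instead).

#!/usr/bin/env bash
# Simplified sketch of the lookup pattern shown in the trace; not a copy of
# the real setup/common.sh helper, which differs in detail.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # When a node index is given and a per-node meminfo exists, read that instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix each line with "Node <n> "; strip it so the key is
    # the first field, then split on ": " exactly as the trace does.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # not the requested key, keep scanning
        echo "$val"                        # e.g. 0 for HugePages_Rsvd above
        return 0
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    echo 0                                 # key not present: report 0
}

get_meminfo_sketch HugePages_Rsvd      # prints 0 on the system in this log
get_meminfo_sketch HugePages_Total 0   # per-node lookup for node0

Scanning field by field with continue is why the log repeats the same pattern test once per meminfo key before the match is found.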
00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:32.958 nr_hugepages=1536 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:32.958 resv_hugepages=0 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:32.958 surplus_hugepages=0 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:32.958 anon_hugepages=0 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169404944 kB' 'MemAvailable: 172641632 kB' 'Buffers: 3896 kB' 'Cached: 14730244 kB' 'SwapCached: 0 kB' 'Active: 11615012 kB' 'Inactive: 3694312 kB' 'Active(anon): 11197056 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578848 kB' 'Mapped: 214716 kB' 'Shmem: 10621872 kB' 'KReclaimable: 537268 kB' 'Slab: 1195576 kB' 'SReclaimable: 537268 kB' 'SUnreclaim: 658308 kB' 'KernelStack: 20864 kB' 'PageTables: 9572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12733320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317448 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.958 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.959 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:32.960 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 93092904 kB' 'MemUsed: 4522724 kB' 'SwapCached: 0 kB' 'Active: 1865340 kB' 'Inactive: 219240 kB' 'Active(anon): 1703516 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 219240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1896708 kB' 'Mapped: 78196 kB' 'AnonPages: 191136 kB' 'Shmem: 1515644 kB' 'KernelStack: 10680 kB' 'PageTables: 3548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 349228 kB' 'Slab: 653148 kB' 'SReclaimable: 349228 kB' 'SUnreclaim: 303920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.961 13:45:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.961 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 
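At this point node 0 is finished (its HugePages_Surp came back 0 and was added to nodes_test) and the loop moves on to node 1, whose counters are read from /sys/devices/system/node/node1/meminfo. The surrounding hugepages.sh logic is roughly the loop sketched below; the array names nodes_sys and nodes_test and the 512/1024 split are taken from the trace, but how nodes_sys is originally filled is not visible here, so reading it from the per-node 2048kB nr_hugepages counter is an assumption.

#!/usr/bin/env bash
# Rough stand-in for the per-node bookkeeping walked through in the trace;
# not the actual setup/hugepages.sh code.
shopt -s extglob
resv=0                        # reserved hugepages, 0 in this run
nodes_sys=() nodes_test=()

for node in /sys/devices/system/node/node+([0-9]); do
    n=${node##*node}
    # The trace shows node0=512 and node1=1024; assuming the values come from
    # the per-node 2 MB hugepage counter.
    nodes_sys[n]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done

for n in "${!nodes_sys[@]}"; do
    (( nodes_test[n] = nodes_sys[n] + resv ))
    # Add each node's surplus, queried the same way the trace asks for HugePages_Surp.
    surp=$(awk '/HugePages_Surp/ {print $NF}' "/sys/devices/system/node/node$n/meminfo")
    (( nodes_test[n] += surp ))
done
declare -p nodes_test         # expect [0]=512 [1]=1024 on the logged system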
00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 76303476 kB' 'MemUsed: 17462032 kB' 'SwapCached: 0 kB' 'Active: 9753648 kB' 'Inactive: 3475072 kB' 'Active(anon): 9497516 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3475072 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12837456 kB' 'Mapped: 136436 kB' 'AnonPages: 391572 kB' 'Shmem: 9106252 kB' 'KernelStack: 9976 kB' 'PageTables: 5548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 188040 kB' 'Slab: 542396 kB' 'SReclaimable: 188040 kB' 'SUnreclaim: 354356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.962 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
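For reference, the numbers already printed in this trace are self-consistent: node0 reports HugePages_Total 512 and node1 reports 1024, which together account for the global HugePages_Total of 1536 with zero surplus and zero reserved, i.e. exactly the condition (( 1536 == nr_hugepages + surp + resv )) checked earlier. A one-line check using only values taken from the log:

nr_hugepages=1536 surp=0 resv=0      # global figures from /proc/meminfo above
node0=512 node1=1024                 # per-node HugePages_Total from the trace
(( node0 + node1 == nr_hugepages + surp + resv )) && echo "per-node split matches the global count"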
00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.963 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.964 13:45:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc 
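The span just condensed is setup/common.sh's get_meminfo scanning /proc/meminfo one field at a time until it reaches the requested key (HugePages_Surp here, which comes back as 0). A minimal sketch of that lookup pattern, reconstructed only from the xtrace visible in this log and not from the SPDK sources (the function name get_meminfo_sketch is illustrative), would look roughly like this:

#!/usr/bin/env bash
# Illustrative sketch only, inferred from the IFS=': ' / read -r var val _ /
# continue trace above; the real setup/common.sh helper also handles the
# per-NUMA-node meminfo files under /sys/devices/system/node/.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every key we were not asked for
        echo "$val"
        return 0
    done </proc/meminfo
    return 1
}

get_meminfo_sketch HugePages_Surp   # prints the surplus hugepage count (0 in this run)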
00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:32.964 node0=512 expecting 512
00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:32.964 node1=1024 expecting 1024
00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:32.964
00:03:32.964 real 0m2.800s
00:03:32.964 user 0m1.175s
00:03:32.964 sys 0m1.693s
00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:32.964 13:45:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:32.964 ************************************
00:03:32.964 END TEST custom_alloc
00:03:32.964 ************************************
00:03:32.964 13:45:00 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:32.964 13:45:00 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:32.964 13:45:00 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:32.964 13:45:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:32.964 ************************************
00:03:32.964 START TEST no_shrink_alloc
00:03:32.964 ************************************
00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc --
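The 'node0=512 expecting 512' / 'node1=1024 expecting 1024' lines above are the custom_alloc per-NUMA-node check passing ([[ 512,1024 == 512,1024 ]]). As a hedged illustration of where such per-node counts live (a hypothetical snippet, not the hugepages.sh source), the kernel exposes them per node in sysfs:

#!/usr/bin/env bash
# Illustrative only: read each NUMA node's 2048 kB hugepage count from sysfs
# and compare the joined list against an expected "node0,node1" string, the
# same shape as the 512,1024 comparison in the log above.
expected="512,1024"   # expectation taken from the log output above
counts=()
for node_dir in /sys/devices/system/node/node[0-9]*; do
    counts+=("$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")")
done
joined=$(IFS=,; printf '%s' "${counts[*]}")
if [[ $joined == "$expected" ]]; then
    echo "per-node hugepages match: $joined"
else
    echo "per-node hugepages mismatch: got $joined, expected $expected"
fi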
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.964 13:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:35.505 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:35.505 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:35.505 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:35.505 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:35.505 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:35.505 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:35.505 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:35.505 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:35.505 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:35.505 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:35.505 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:35.505 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:35.505 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:35.505 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:35.505 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:35.505 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:35.505 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:35.769 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:35.769 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:35.769 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:35.769 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:35.769 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' 
]] 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170398944 kB' 'MemAvailable: 173635600 kB' 'Buffers: 3896 kB' 'Cached: 14730356 kB' 'SwapCached: 0 kB' 'Active: 11620616 kB' 'Inactive: 3694312 kB' 'Active(anon): 11202660 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583532 kB' 'Mapped: 214936 kB' 'Shmem: 10621984 kB' 'KReclaimable: 537204 kB' 'Slab: 1195924 kB' 'SReclaimable: 537204 kB' 'SUnreclaim: 658720 kB' 'KernelStack: 20640 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12735872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317292 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.770 13:45:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[get_meminfo xtrace: read -r var val _ over every /proc/meminfo field, skipping each key that is not AnonHugePages]
00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.771 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170399584 kB' 'MemAvailable: 173636240 kB' 'Buffers: 3896 kB' 'Cached: 14730360 kB' 'SwapCached: 0 kB' 'Active: 11620272 kB' 'Inactive: 3694312 kB' 'Active(anon): 11202316 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583172 kB' 'Mapped: 214904 kB' 'Shmem: 10621988 kB' 'KReclaimable: 537204 kB' 'Slab: 1195916 kB' 'SReclaimable: 537204 kB' 'SUnreclaim: 658712 kB' 'KernelStack: 20656 kB' 'PageTables: 9064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12735888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317260 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB'
[get_meminfo xtrace: read -r var val _ over every /proc/meminfo field, skipping each key that is not HugePages_Surp]
00:03:35.773 13:45:03
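At this point verify_nr_hugepages has collected anon=0 and surp=0 and, after the condensed pass above, reads HugePages_Rsvd with one more get_meminfo call. As an aside (illustrative snippet, not part of the test scripts), the same aggregate counters can be pulled in a single pass over /proc/meminfo:

#!/usr/bin/env bash
# Illustrative only: collect every HugePages_* counter from /proc/meminfo in
# one pass instead of the one-key-per-call pattern traced in this log.
declare -A hp
while IFS=': ' read -r key val _; do
    [[ $key == HugePages_* ]] && hp[$key]=$val
done </proc/meminfo
for k in HugePages_Total HugePages_Free HugePages_Rsvd HugePages_Surp; do
    printf '%s=%s\n' "$k" "${hp[$k]:-0}"
done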
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170400324 kB' 'MemAvailable: 173636980 kB' 'Buffers: 3896 kB' 'Cached: 14730380 kB' 'SwapCached: 0 kB' 'Active: 11620260 kB' 'Inactive: 3694312 kB' 'Active(anon): 11202304 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583176 kB' 'Mapped: 214904 kB' 'Shmem: 10622008 kB' 'KReclaimable: 537204 kB' 'Slab: 1195916 kB' 'SReclaimable: 537204 kB' 'SUnreclaim: 658712 kB' 'KernelStack: 20656 kB' 'PageTables: 9064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12735912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317260 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 
kB' 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.773 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.774 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 
13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.775 13:45:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:35.775 nr_hugepages=1024 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.775 resv_hugepages=0 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.775 surplus_hugepages=0 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.775 anon_hugepages=0 00:03:35.775 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170400940 kB' 'MemAvailable: 173637596 kB' 'Buffers: 3896 kB' 'Cached: 14730400 kB' 'SwapCached: 0 kB' 'Active: 11620276 kB' 'Inactive: 3694312 kB' 'Active(anon): 11202320 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583176 kB' 'Mapped: 214904 kB' 'Shmem: 10622028 kB' 'KReclaimable: 537204 kB' 'Slab: 1195916 kB' 'SReclaimable: 537204 kB' 'SUnreclaim: 658712 kB' 'KernelStack: 20656 kB' 'PageTables: 9064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12735932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317260 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 
13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.777 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92040344 kB' 'MemUsed: 5575284 kB' 'SwapCached: 0 kB' 'Active: 1873612 kB' 'Inactive: 219240 kB' 'Active(anon): 1711788 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 219240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1896860 kB' 'Mapped: 78424 kB' 'AnonPages: 199292 kB' 'Shmem: 1515796 kB' 'KernelStack: 10744 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 349196 kB' 'Slab: 653720 kB' 'SReclaimable: 349196 kB' 'SUnreclaim: 304524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 13:45:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 13:45:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:35.779 node0=1024 expecting 1024 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.779 13:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:38.320 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:38.320 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:38.320 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:38.320 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:38.320 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:38.320 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:38.320 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:38.320 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:38.320 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:38.320 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:38.320 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:38.320 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:38.320 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:38.320 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:38.321 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:38.321 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:38.321 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:38.321 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- 
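The block above is bash xtrace from the get_meminfo helper in setup/common.sh: it snapshots /proc/meminfo (or a node's own meminfo file), strips any "Node N " prefix, then walks the "Key: value" pairs with IFS=': ' read -r var val _, skipping every key that is not the one requested and finally echoing its value (or 0). A condensed sketch of that pattern follows; the names mirror the trace, but the exact loop wiring (shown here as a process substitution) is an assumption, so read it as an illustration rather than the script itself.

shopt -s extglob

# Condensed sketch of the lookup traced above; the loop wiring is assumed.
get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo mem

    # Per-node queries read that node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node N " prefix on per-node files

    # Walk the "Key: value [kB]" pairs; print the value of the requested key.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # every non-matching key is skipped, as in the trace
        echo "${val:-0}"
        return 0
    done < <(printf '%s\n' "${mem[@]}")

    echo 0
}

get_meminfo HugePages_Free   # prints 1024 on this box, per the snapshot above
get_meminfo HugePages_Surp   # prints 0, which is the value the test goes on to use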
setup/hugepages.sh@90 -- # local sorted_t 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170406208 kB' 'MemAvailable: 173642864 kB' 'Buffers: 3896 kB' 'Cached: 14730480 kB' 'SwapCached: 0 kB' 'Active: 11620660 kB' 'Inactive: 3694312 kB' 'Active(anon): 11202704 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583816 kB' 'Mapped: 214812 kB' 'Shmem: 10622108 kB' 'KReclaimable: 537204 kB' 'Slab: 1196016 kB' 'SReclaimable: 537204 kB' 'SUnreclaim: 658812 kB' 'KernelStack: 20688 kB' 'PageTables: 9032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12736576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317324 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.618 13:45:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.618 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.619 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # 
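By this point the trace has entered verify_nr_hugepages: it gated anonymous huge pages on the transparent_hugepage setting (the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test), settled on anon=0, and is about to fetch HugePages_Surp and HugePages_Rsvd the same way. A rough skeleton of that flow is below, assuming get_meminfo behaves as sketched earlier; the per-node bookkeeping is deliberately simplified to the single node0=1024 case this run reports, so treat the details (including the expected variable) as hypothetical.

# Rough skeleton of the verify_nr_hugepages flow; simplified to one node.
verify_nr_hugepages() {
    local node surp resv anon
    local -a nodes_test sorted_t
    local expected=1024                      # what this run expects node0 to hold

    # Anonymous (THP) pages only count when THP is not fully disabled.
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
        anon=$(get_meminfo AnonHugePages)    # 0 in this run
    else
        anon=0
    fi

    surp=$(get_meminfo HugePages_Surp)       # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)       # 0 in this run

    # Compare what each node actually holds against the expected count.
    nodes_test[0]=$(get_meminfo HugePages_Total)
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1
        echo "node$node=${nodes_test[node]} expecting $expected"
        [[ ${nodes_test[node]} == "$expected" ]]
    done
}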
mapfile -t mem 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170406764 kB' 'MemAvailable: 173643420 kB' 'Buffers: 3896 kB' 'Cached: 14730484 kB' 'SwapCached: 0 kB' 'Active: 11620352 kB' 'Inactive: 3694312 kB' 'Active(anon): 11202396 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583528 kB' 'Mapped: 214780 kB' 'Shmem: 10622112 kB' 'KReclaimable: 537204 kB' 'Slab: 1196132 kB' 'SReclaimable: 537204 kB' 'SUnreclaim: 658928 kB' 'KernelStack: 20688 kB' 'PageTables: 9172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12736592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317276 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.620 13:45:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 
13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.620 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.621 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170406436 kB' 'MemAvailable: 173643092 kB' 'Buffers: 3896 kB' 'Cached: 14730484 kB' 'SwapCached: 0 kB' 'Active: 11620380 kB' 'Inactive: 3694312 kB' 'Active(anon): 11202424 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583556 kB' 'Mapped: 214780 kB' 'Shmem: 10622112 kB' 'KReclaimable: 537204 kB' 'Slab: 1196116 kB' 'SReclaimable: 537204 kB' 'SUnreclaim: 658912 kB' 'KernelStack: 20688 kB' 'PageTables: 9160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12736616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317276 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.622 13:45:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.622 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.623 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.624 
13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:38.624 nr_hugepages=1024 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:38.624 resv_hugepages=0 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:38.624 surplus_hugepages=0 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:38.624 anon_hugepages=0 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170408092 kB' 'MemAvailable: 173644748 kB' 'Buffers: 3896 kB' 'Cached: 14730520 kB' 'SwapCached: 0 kB' 'Active: 11620744 kB' 'Inactive: 3694312 kB' 'Active(anon): 11202788 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583856 kB' 'Mapped: 214780 kB' 'Shmem: 10622148 kB' 'KReclaimable: 537204 kB' 'Slab: 1196116 kB' 'SReclaimable: 537204 kB' 'SUnreclaim: 658912 kB' 'KernelStack: 20672 kB' 'PageTables: 9160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12736620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317260 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.624 13:45:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.624 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.625 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.626 13:45:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
97615628 kB' 'MemFree: 92039556 kB' 'MemUsed: 5576072 kB' 'SwapCached: 0 kB' 'Active: 1873112 kB' 'Inactive: 219240 kB' 'Active(anon): 1711288 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 219240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1896976 kB' 'Mapped: 78348 kB' 'AnonPages: 198504 kB' 'Shmem: 1515912 kB' 'KernelStack: 10776 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 349196 kB' 'Slab: 654024 kB' 'SReclaimable: 349196 kB' 'SUnreclaim: 304828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.626 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 
13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.627 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.628 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.628 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.628 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.628 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.628 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.628 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.628 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.628 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.628 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:38.628 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:38.628 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:38.628 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:38.628 node0=1024 expecting 1024 00:03:38.628 13:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:38.628 00:03:38.628 real 0m5.614s 00:03:38.628 user 0m2.348s 00:03:38.628 sys 0m3.405s 00:03:38.628 13:45:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:38.628 13:45:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:38.628 ************************************ 00:03:38.628 END TEST no_shrink_alloc 00:03:38.628 ************************************ 00:03:38.628 13:45:05 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:38.628 13:45:05 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:38.628 13:45:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
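The get_meminfo calls traced above all follow the same pattern: pick /proc/meminfo (or the per-node /sys/devices/system/node/node<N>/meminfo file when a node index is given), scan each "Key: value" line with IFS=': ', and echo the value once the requested key (HugePages_Rsvd, HugePages_Total, HugePages_Surp) matches. A minimal bash sketch of that lookup follows; the function name and the "Node <N> " prefix handling are illustrative assumptions, not the exact setup/common.sh helper.

    # Minimal sketch of the meminfo lookup pattern shown in the trace above.
    # Illustrative only -- get_meminfo_sketch is not the upstream helper.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}            # key to fetch, optional NUMA node index
        local mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#"Node $node "}      # per-node files prefix each line with "Node <N> "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                 # e.g. 1024 for HugePages_Total, 0 for HugePages_Rsvd
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    # Usage mirroring the trace: resv=$(get_meminfo_sketch HugePages_Rsvd)
    #                            surp0=$(get_meminfo_sketch HugePages_Surp 0)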
00:03:38.628 13:45:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.628 13:45:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.628 13:45:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.628 13:45:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.628 13:45:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:38.628 13:45:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.628 13:45:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.628 13:45:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.628 13:45:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.628 13:45:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:38.628 13:45:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:38.628 00:03:38.628 real 0m20.983s 00:03:38.628 user 0m8.319s 00:03:38.628 sys 0m12.235s 00:03:38.628 13:45:05 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:38.628 13:45:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:38.628 ************************************ 00:03:38.628 END TEST hugepages 00:03:38.628 ************************************ 00:03:38.628 13:45:06 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:38.628 13:45:06 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:38.628 13:45:06 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:38.628 13:45:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:38.628 ************************************ 00:03:38.628 START TEST driver 00:03:38.628 ************************************ 00:03:38.628 13:45:06 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:38.888 * Looking for test storage... 
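The clear_hp pass that closes the hugepages suite above releases every reserved hugepage by writing zero into each per-node, per-size nr_hugepages file, then exports CLEAR_HUGE=yes, as the trace shows. A minimal sketch of that idea, assuming root privileges; the loop is a plain reconstruction of the pattern visible in the trace, not the scripts' exact code.

# Release all reserved hugepages on every NUMA node (needs root).
for node in /sys/devices/system/node/node[0-9]*; do
	for hp in "$node"/hugepages/hugepages-*; do
		echo 0 > "$hp/nr_hugepages"
	done
done
export CLEAR_HUGE=yes   # exported by the test here as well, as seen in the trace above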
00:03:38.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:38.888 13:45:06 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:38.888 13:45:06 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.888 13:45:06 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:43.091 13:45:09 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:43.091 13:45:09 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.091 13:45:09 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.091 13:45:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:43.091 ************************************ 00:03:43.091 START TEST guess_driver 00:03:43.091 ************************************ 00:03:43.091 13:45:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:03:43.091 13:45:09 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:43.091 13:45:09 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:43.091 13:45:09 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:43.091 13:45:09 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:43.091 13:45:09 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:43.091 13:45:09 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:43.091 13:45:09 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:43.091 13:45:09 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:43.091 13:45:09 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:43.091 13:45:09 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:43.091 13:45:09 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:43.091 13:45:09 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:43.091 13:45:09 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:43.091 13:45:09 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:43.091 13:45:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:43.091 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:43.091 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:43.091 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:43.091 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:43.091 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:43.091 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:43.091 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:43.091 13:45:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:43.091 13:45:10 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:43.091 13:45:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:43.091 13:45:10 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:43.091 13:45:10 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:43.091 Looking for driver=vfio-pci 00:03:43.091 13:45:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.091 13:45:10 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:43.091 13:45:10 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.091 13:45:10 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.631 13:45:12 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.631 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.632 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.632 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.632 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.632 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.632 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.632 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.632 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.632 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.632 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.632 13:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:46.202 13:45:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:46.202 13:45:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:46.202 13:45:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:46.461 13:45:13 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:46.461 13:45:13 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:46.461 13:45:13 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:46.461 13:45:13 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.655 00:03:50.655 real 0m7.517s 00:03:50.655 user 0m2.144s 00:03:50.655 sys 0m3.819s 00:03:50.655 13:45:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:50.655 13:45:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:50.655 ************************************ 00:03:50.655 END TEST guess_driver 00:03:50.655 ************************************ 00:03:50.655 00:03:50.655 real 0m11.504s 00:03:50.655 user 0m3.307s 00:03:50.655 sys 0m5.932s 00:03:50.655 13:45:17 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:50.655 
13:45:17 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:50.655 ************************************ 00:03:50.655 END TEST driver 00:03:50.656 ************************************ 00:03:50.656 13:45:17 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:50.656 13:45:17 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:50.656 13:45:17 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:50.656 13:45:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:50.656 ************************************ 00:03:50.656 START TEST devices 00:03:50.656 ************************************ 00:03:50.656 13:45:17 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:50.656 * Looking for test storage... 00:03:50.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:50.656 13:45:17 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:50.656 13:45:17 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:50.656 13:45:17 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.656 13:45:17 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:53.942 13:45:20 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:53.942 13:45:20 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:53.942 13:45:20 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:53.942 13:45:20 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:53.942 13:45:20 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:53.942 13:45:20 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:53.942 13:45:20 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:53.942 13:45:20 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:53.942 13:45:20 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:53.942 13:45:20 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:53.942 13:45:20 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:53.942 13:45:20 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:53.942 13:45:20 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:53.942 13:45:20 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:53.942 13:45:20 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:53.942 13:45:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:53.942 13:45:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:53.942 13:45:20 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:53.942 13:45:20 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:53.942 13:45:20 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:53.942 13:45:20 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:53.942 13:45:20 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:53.942 No valid GPT data, 
bailing 00:03:53.942 13:45:20 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:53.942 13:45:20 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:53.942 13:45:20 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:53.942 13:45:20 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:53.942 13:45:20 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:53.942 13:45:20 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:53.942 13:45:20 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:53.942 13:45:20 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:53.942 13:45:20 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:53.942 13:45:20 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:53.942 13:45:20 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:53.942 13:45:20 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:53.942 13:45:20 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:53.942 13:45:20 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:53.942 13:45:20 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:53.942 13:45:20 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:53.942 ************************************ 00:03:53.942 START TEST nvme_mount 00:03:53.942 ************************************ 00:03:53.942 13:45:20 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:03:53.942 13:45:20 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:53.942 13:45:20 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:53.942 13:45:20 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.942 13:45:20 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.942 13:45:20 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:53.942 13:45:20 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:53.942 13:45:20 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:53.942 13:45:20 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:53.942 13:45:20 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:53.942 13:45:20 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:53.942 13:45:20 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:53.942 13:45:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:53.942 13:45:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:53.942 13:45:20 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:53.942 13:45:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:53.942 13:45:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:53.942 13:45:20 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:53.942 13:45:20 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:53.942 13:45:20 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:54.511 Creating new GPT entries in memory. 00:03:54.511 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:54.511 other utilities. 00:03:54.511 13:45:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:54.511 13:45:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:54.511 13:45:21 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:54.511 13:45:21 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:54.511 13:45:21 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:55.448 Creating new GPT entries in memory. 00:03:55.448 The operation has completed successfully. 00:03:55.448 13:45:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:55.448 13:45:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:55.448 13:45:22 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2765175 00:03:55.448 13:45:22 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.448 13:45:22 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:55.448 13:45:22 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.448 13:45:22 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:55.448 13:45:22 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:55.708 13:45:22 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.708 13:45:22 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:55.708 13:45:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:55.708 13:45:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:55.708 13:45:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.708 13:45:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:55.708 13:45:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:55.708 13:45:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:55.708 13:45:22 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:55.708 13:45:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
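Strung together, the nvme_mount preparation traced above is a short how-to: confirm the disk carries no partition table and is at least min_disk_size (3221225472 bytes), wipe it, carve a roughly 1 GiB partition at sectors 2048..2099199, format it ext4, and mount it. A hedged sketch of that flow with stock tools; the device name matches the run above, but the mount point and the size probe via blockdev are illustrative assumptions (the scripts use their own helpers such as spdk-gpt.py).

disk=/dev/nvme0n1           # device under test, as in the trace above
mnt=/tmp/spdk_nvme_mount    # illustrative mount point, not the test's path

# Refuse disks that already have a partition table or are smaller than 3 GiB.
[[ -z $(blkid -s PTTYPE -o value "$disk") ]] || exit 1
(( $(blockdev --getsize64 "$disk") >= 3221225472 )) || exit 1

sgdisk "$disk" --zap-all               # destroy any old GPT/MBR structures
sgdisk "$disk" --new=1:2048:2099199    # one ~1 GiB partition, same sector range as above
mkfs.ext4 -qF "${disk}p1"
mkdir -p "$mnt" && mount "${disk}p1" "$mnt"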
00:03:55.708 13:45:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.708 13:45:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:55.708 13:45:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:55.708 13:45:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.708 13:45:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.251 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:58.510 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.510 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.510 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:58.510 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:58.510 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:58.510 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:58.510 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:58.769 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:58.769 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:58.769 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:58.769 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:58.769 13:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:58.769 13:45:25 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:58.769 13:45:25 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.769 13:45:25 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:58.769 13:45:25 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:58.769 13:45:26 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.769 13:45:26 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.769 13:45:26 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:58.769 13:45:26 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:58.769 13:45:26 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.769 13:45:26 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.769 13:45:26 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.769 13:45:26 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:58.769 13:45:26 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:58.769 13:45:26 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.769 13:45:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.769 13:45:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:58.769 13:45:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.769 13:45:26 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.769 13:45:26 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.302 13:45:28 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.302 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.562 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:01.562 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:01.562 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.562 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:01.562 13:45:28 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:01.562 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.562 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:01.562 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:01.562 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:01.562 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:01.562 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:01.562 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:01.562 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:01.562 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:01.562 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.562 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:01.562 13:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:01.562 13:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.562 13:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:04.096 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:04.096 00:04:04.096 real 0m10.435s 00:04:04.096 user 0m2.972s 00:04:04.096 sys 0m5.273s 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:04.096 13:45:31 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:04.096 ************************************ 00:04:04.096 END TEST nvme_mount 00:04:04.096 ************************************ 
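The teardown that ends nvme_mount above is the reverse of the setup: unmount the scratch directory if it is still a mount point, then wipe every filesystem and partition-table signature so the next test starts from a blank disk. A minimal sketch of that sequence; the mount point is an illustrative assumption.

mnt=/tmp/spdk_nvme_mount    # illustrative; the test uses its own nvme_mount directory
mountpoint -q "$mnt" && umount "$mnt"
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1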
00:04:04.096 13:45:31 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:04.096 13:45:31 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.096 13:45:31 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.096 13:45:31 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:04.096 ************************************ 00:04:04.096 START TEST dm_mount 00:04:04.096 ************************************ 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:04.096 13:45:31 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:05.035 Creating new GPT entries in memory. 00:04:05.035 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:05.035 other utilities. 00:04:05.035 13:45:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:05.035 13:45:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.035 13:45:32 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:05.035 13:45:32 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:05.035 13:45:32 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:05.977 Creating new GPT entries in memory. 00:04:05.977 The operation has completed successfully. 
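Each sgdisk call in the dm_mount setup is paired with a wait for the kernel and udev to surface the new partition before anything touches it; that is what the sync_dev_uevents.sh invocation above is doing. A hedged stand-in for that wait using stock tooling only, since the helper script itself is not shown in this log.

# Wait until the freshly created partition node exists.
udevadm settle
for _ in {1..50}; do
	[[ -b /dev/nvme0n1p1 ]] && break
	sleep 0.1
done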
00:04:05.977 13:45:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:05.977 13:45:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.977 13:45:33 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:05.977 13:45:33 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:05.977 13:45:33 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:06.916 The operation has completed successfully. 00:04:06.916 13:45:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:06.916 13:45:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2769349 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.175 13:45:34 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.712 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.971 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.971 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:09.971 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.971 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:09.971 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:09.971 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.971 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:09.971 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:09.971 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:09.971 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:09.971 13:45:37 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:09.971 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:09.971 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:09.971 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:09.971 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.971 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:09.971 13:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:09.971 13:45:37 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.971 13:45:37 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:12.505 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:12.765 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:12.765 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:12.765 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:12.765 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:12.765 13:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:12.765 00:04:12.765 real 0m8.691s 00:04:12.765 user 0m2.106s 00:04:12.765 sys 0m3.630s 00:04:12.765 13:45:39 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:12.765 13:45:39 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:12.765 ************************************ 00:04:12.765 END TEST dm_mount 00:04:12.765 ************************************ 00:04:12.765 13:45:40 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:12.765 13:45:40 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:12.765 13:45:40 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.765 13:45:40 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:12.765 13:45:40 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:12.765 13:45:40 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:12.765 13:45:40 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:13.023 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:13.023 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:13.023 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:13.023 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:13.023 13:45:40 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:13.023 13:45:40 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:13.023 13:45:40 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:13.023 13:45:40 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:13.023 13:45:40 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:13.023 13:45:40 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:13.023 13:45:40 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:13.023 00:04:13.023 real 0m22.683s 00:04:13.023 user 0m6.296s 00:04:13.023 sys 0m11.115s 00:04:13.023 13:45:40 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.023 13:45:40 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:13.023 ************************************ 00:04:13.023 END TEST devices 00:04:13.023 ************************************ 00:04:13.023 00:04:13.023 real 1m14.295s 00:04:13.023 user 0m24.312s 00:04:13.023 sys 0m40.704s 00:04:13.023 13:45:40 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.023 13:45:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:13.023 ************************************ 00:04:13.023 END TEST setup.sh 00:04:13.023 ************************************ 00:04:13.023 13:45:40 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:15.557 Hugepages 00:04:15.557 node hugesize free / total 00:04:15.557 node0 1048576kB 0 / 0 00:04:15.557 node0 2048kB 2048 / 2048 00:04:15.557 node1 1048576kB 0 / 0 00:04:15.557 node1 2048kB 0 / 0 00:04:15.557 00:04:15.557 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:15.557 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:15.557 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:15.557 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:15.557 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:15.557 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:15.557 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:15.557 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:15.557 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:15.557 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:15.557 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:15.557 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:15.557 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:15.557 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:15.557 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:15.557 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:15.557 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:15.557 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:15.557 13:45:42 -- spdk/autotest.sh@130 -- # uname -s 00:04:15.557 13:45:42 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:15.557 13:45:42 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:15.557 13:45:42 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:18.846 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:18.846 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:18.846 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:18.846 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:18.846 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:18.846 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:18.846 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:18.846 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:18.846 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:18.846 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:18.846 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:18.846 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:18.846 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:18.846 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:18.846 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:18.846 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:19.105 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:19.364 13:45:46 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:20.303 13:45:47 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:20.303 13:45:47 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:20.303 13:45:47 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:20.303 13:45:47 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:20.303 13:45:47 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:20.303 13:45:47 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:20.303 13:45:47 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:20.303 13:45:47 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:20.303 13:45:47 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:20.303 13:45:47 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:20.303 13:45:47 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:20.303 13:45:47 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.594 Waiting for block devices as requested 00:04:23.594 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:23.594 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:23.594 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:23.594 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:23.594 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:23.594 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:23.594 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:23.594 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:23.594 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:23.594 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:23.854 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:23.854 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:23.854 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:24.113 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:24.113 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:24.113 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:24.113 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:24.372 13:45:51 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 
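The get_nvme_bdfs helper traced above builds its device list by piping scripts/gen_nvme.sh through jq. A minimal standalone sketch of the same enumeration, assuming the workspace path shown in this log (on this host it prints the single address 0000:5e:00.0):

    # Sketch: collect NVMe PCI addresses the same way get_nvme_bdfs does in the trace above.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path taken from this log
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe devices found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"                                  # expected here: 0000:5e:00.0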
00:04:24.372 13:45:51 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:24.372 13:45:51 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:24.372 13:45:51 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:04:24.372 13:45:51 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:24.372 13:45:51 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:24.372 13:45:51 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:24.372 13:45:51 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:24.372 13:45:51 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:24.372 13:45:51 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:24.372 13:45:51 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:24.372 13:45:51 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:24.372 13:45:51 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:24.372 13:45:51 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:24.372 13:45:51 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:24.372 13:45:51 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:24.372 13:45:51 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:24.372 13:45:51 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:24.372 13:45:51 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:24.372 13:45:51 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:24.372 13:45:51 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:24.372 13:45:51 -- common/autotest_common.sh@1557 -- # continue 00:04:24.372 13:45:51 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:24.372 13:45:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:24.372 13:45:51 -- common/autotest_common.sh@10 -- # set +x 00:04:24.372 13:45:51 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:24.372 13:45:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:24.372 13:45:51 -- common/autotest_common.sh@10 -- # set +x 00:04:24.372 13:45:51 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:26.906 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:26.906 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:26.906 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:26.906 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:26.906 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:26.906 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:26.906 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:26.906 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:26.906 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:26.906 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:26.906 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:26.906 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:26.906 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:26.906 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:26.906 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:26.906 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:27.476 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:27.736 13:45:54 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:27.736 13:45:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:27.736 13:45:54 -- 
common/autotest_common.sh@10 -- # set +x 00:04:27.736 13:45:54 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:27.736 13:45:54 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:27.736 13:45:54 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:27.736 13:45:54 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:27.736 13:45:54 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:27.736 13:45:54 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:27.736 13:45:54 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:27.736 13:45:54 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:27.736 13:45:54 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:27.736 13:45:54 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:27.736 13:45:54 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:27.736 13:45:55 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:27.736 13:45:55 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:27.736 13:45:55 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:27.736 13:45:55 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:27.736 13:45:55 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:27.736 13:45:55 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:27.736 13:45:55 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:27.736 13:45:55 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:04:27.736 13:45:55 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:04:27.736 13:45:55 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2778111 00:04:27.736 13:45:55 -- common/autotest_common.sh@1598 -- # waitforlisten 2778111 00:04:27.736 13:45:55 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.736 13:45:55 -- common/autotest_common.sh@831 -- # '[' -z 2778111 ']' 00:04:27.736 13:45:55 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.736 13:45:55 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:27.736 13:45:55 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.736 13:45:55 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:27.736 13:45:55 -- common/autotest_common.sh@10 -- # set +x 00:04:27.736 [2024-07-26 13:45:55.101708] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
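The opal_revert_cleanup path above selects controllers by PCI device ID: it reads the sysfs device file and compares it against 0x0a54. A hand-run sketch of the same check, assuming the BDF reported by this run:

    # Sketch: the device-ID filter used by get_nvme_bdfs_by_id, done manually.
    bdf=0000:5e:00.0                                   # address reported earlier in this log
    dev_id=$(cat "/sys/bus/pci/devices/$bdf/device")   # reads 0x0a54 on this host
    [[ $dev_id == 0x0a54 ]] && echo "$bdf matches device ID 0x0a54"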
00:04:27.736 [2024-07-26 13:45:55.101756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2778111 ] 00:04:27.736 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.736 [2024-07-26 13:45:55.156241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.995 [2024-07-26 13:45:55.239170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.564 13:45:55 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:28.564 13:45:55 -- common/autotest_common.sh@864 -- # return 0 00:04:28.564 13:45:55 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:28.564 13:45:55 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:28.564 13:45:55 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:31.886 nvme0n1 00:04:31.886 13:45:58 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:31.886 [2024-07-26 13:45:59.048488] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:31.886 request: 00:04:31.886 { 00:04:31.886 "nvme_ctrlr_name": "nvme0", 00:04:31.886 "password": "test", 00:04:31.886 "method": "bdev_nvme_opal_revert", 00:04:31.886 "req_id": 1 00:04:31.886 } 00:04:31.886 Got JSON-RPC error response 00:04:31.886 response: 00:04:31.886 { 00:04:31.886 "code": -32602, 00:04:31.886 "message": "Invalid parameters" 00:04:31.886 } 00:04:31.886 13:45:59 -- common/autotest_common.sh@1604 -- # true 00:04:31.886 13:45:59 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:31.886 13:45:59 -- common/autotest_common.sh@1608 -- # killprocess 2778111 00:04:31.886 13:45:59 -- common/autotest_common.sh@950 -- # '[' -z 2778111 ']' 00:04:31.886 13:45:59 -- common/autotest_common.sh@954 -- # kill -0 2778111 00:04:31.886 13:45:59 -- common/autotest_common.sh@955 -- # uname 00:04:31.886 13:45:59 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:31.886 13:45:59 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2778111 00:04:31.886 13:45:59 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:31.886 13:45:59 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:31.886 13:45:59 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2778111' 00:04:31.886 killing process with pid 2778111 00:04:31.886 13:45:59 -- common/autotest_common.sh@969 -- # kill 2778111 00:04:31.886 13:45:59 -- common/autotest_common.sh@974 -- # wait 2778111 00:04:33.791 13:46:00 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:33.791 13:46:00 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:33.791 13:46:00 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:33.791 13:46:00 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:33.791 13:46:00 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:33.791 13:46:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:33.791 13:46:00 -- common/autotest_common.sh@10 -- # set +x 00:04:33.791 13:46:00 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:33.791 13:46:00 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:33.791 13:46:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:04:33.791 13:46:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.791 13:46:00 -- common/autotest_common.sh@10 -- # set +x 00:04:33.791 ************************************ 00:04:33.791 START TEST env 00:04:33.791 ************************************ 00:04:33.791 13:46:00 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:33.791 * Looking for test storage... 00:04:33.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:33.791 13:46:00 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:33.791 13:46:00 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.791 13:46:00 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.791 13:46:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.791 ************************************ 00:04:33.791 START TEST env_memory 00:04:33.791 ************************************ 00:04:33.791 13:46:00 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:33.791 00:04:33.791 00:04:33.791 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.791 http://cunit.sourceforge.net/ 00:04:33.791 00:04:33.791 00:04:33.791 Suite: memory 00:04:33.791 Test: alloc and free memory map ...[2024-07-26 13:46:00.900515] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:33.791 passed 00:04:33.791 Test: mem map translation ...[2024-07-26 13:46:00.918458] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:33.791 [2024-07-26 13:46:00.918474] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:33.791 [2024-07-26 13:46:00.918509] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:33.791 [2024-07-26 13:46:00.918516] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:33.791 passed 00:04:33.791 Test: mem map registration ...[2024-07-26 13:46:00.955011] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:33.791 [2024-07-26 13:46:00.955025] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:33.791 passed 00:04:33.791 Test: mem map adjacent registrations ...passed 00:04:33.791 00:04:33.791 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.791 suites 1 1 n/a 0 0 00:04:33.791 tests 4 4 4 0 0 00:04:33.791 asserts 152 152 152 0 n/a 00:04:33.791 00:04:33.791 Elapsed time = 0.132 seconds 00:04:33.791 00:04:33.791 real 0m0.144s 00:04:33.791 user 0m0.136s 00:04:33.791 sys 0m0.008s 00:04:33.791 13:46:01 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.791 13:46:01 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:04:33.791 ************************************ 00:04:33.791 END TEST env_memory 00:04:33.791 ************************************ 00:04:33.791 13:46:01 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:33.791 13:46:01 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.791 13:46:01 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.791 13:46:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.791 ************************************ 00:04:33.791 START TEST env_vtophys 00:04:33.791 ************************************ 00:04:33.791 13:46:01 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:33.791 EAL: lib.eal log level changed from notice to debug 00:04:33.791 EAL: Detected lcore 0 as core 0 on socket 0 00:04:33.791 EAL: Detected lcore 1 as core 1 on socket 0 00:04:33.791 EAL: Detected lcore 2 as core 2 on socket 0 00:04:33.791 EAL: Detected lcore 3 as core 3 on socket 0 00:04:33.791 EAL: Detected lcore 4 as core 4 on socket 0 00:04:33.791 EAL: Detected lcore 5 as core 5 on socket 0 00:04:33.791 EAL: Detected lcore 6 as core 6 on socket 0 00:04:33.791 EAL: Detected lcore 7 as core 8 on socket 0 00:04:33.791 EAL: Detected lcore 8 as core 9 on socket 0 00:04:33.791 EAL: Detected lcore 9 as core 10 on socket 0 00:04:33.791 EAL: Detected lcore 10 as core 11 on socket 0 00:04:33.791 EAL: Detected lcore 11 as core 12 on socket 0 00:04:33.791 EAL: Detected lcore 12 as core 13 on socket 0 00:04:33.791 EAL: Detected lcore 13 as core 16 on socket 0 00:04:33.791 EAL: Detected lcore 14 as core 17 on socket 0 00:04:33.791 EAL: Detected lcore 15 as core 18 on socket 0 00:04:33.791 EAL: Detected lcore 16 as core 19 on socket 0 00:04:33.791 EAL: Detected lcore 17 as core 20 on socket 0 00:04:33.791 EAL: Detected lcore 18 as core 21 on socket 0 00:04:33.791 EAL: Detected lcore 19 as core 25 on socket 0 00:04:33.791 EAL: Detected lcore 20 as core 26 on socket 0 00:04:33.791 EAL: Detected lcore 21 as core 27 on socket 0 00:04:33.791 EAL: Detected lcore 22 as core 28 on socket 0 00:04:33.791 EAL: Detected lcore 23 as core 29 on socket 0 00:04:33.791 EAL: Detected lcore 24 as core 0 on socket 1 00:04:33.791 EAL: Detected lcore 25 as core 1 on socket 1 00:04:33.791 EAL: Detected lcore 26 as core 2 on socket 1 00:04:33.791 EAL: Detected lcore 27 as core 3 on socket 1 00:04:33.791 EAL: Detected lcore 28 as core 4 on socket 1 00:04:33.791 EAL: Detected lcore 29 as core 5 on socket 1 00:04:33.791 EAL: Detected lcore 30 as core 6 on socket 1 00:04:33.791 EAL: Detected lcore 31 as core 9 on socket 1 00:04:33.791 EAL: Detected lcore 32 as core 10 on socket 1 00:04:33.791 EAL: Detected lcore 33 as core 11 on socket 1 00:04:33.791 EAL: Detected lcore 34 as core 12 on socket 1 00:04:33.791 EAL: Detected lcore 35 as core 13 on socket 1 00:04:33.791 EAL: Detected lcore 36 as core 16 on socket 1 00:04:33.791 EAL: Detected lcore 37 as core 17 on socket 1 00:04:33.791 EAL: Detected lcore 38 as core 18 on socket 1 00:04:33.791 EAL: Detected lcore 39 as core 19 on socket 1 00:04:33.791 EAL: Detected lcore 40 as core 20 on socket 1 00:04:33.791 EAL: Detected lcore 41 as core 21 on socket 1 00:04:33.791 EAL: Detected lcore 42 as core 24 on socket 1 00:04:33.791 EAL: Detected lcore 43 as core 25 on socket 1 00:04:33.791 EAL: Detected lcore 44 as core 26 on socket 1 00:04:33.791 EAL: Detected lcore 45 as core 27 on socket 1 
00:04:33.791 EAL: Detected lcore 46 as core 28 on socket 1 00:04:33.791 EAL: Detected lcore 47 as core 29 on socket 1 00:04:33.791 EAL: Detected lcore 48 as core 0 on socket 0 00:04:33.791 EAL: Detected lcore 49 as core 1 on socket 0 00:04:33.791 EAL: Detected lcore 50 as core 2 on socket 0 00:04:33.791 EAL: Detected lcore 51 as core 3 on socket 0 00:04:33.791 EAL: Detected lcore 52 as core 4 on socket 0 00:04:33.791 EAL: Detected lcore 53 as core 5 on socket 0 00:04:33.791 EAL: Detected lcore 54 as core 6 on socket 0 00:04:33.791 EAL: Detected lcore 55 as core 8 on socket 0 00:04:33.791 EAL: Detected lcore 56 as core 9 on socket 0 00:04:33.791 EAL: Detected lcore 57 as core 10 on socket 0 00:04:33.791 EAL: Detected lcore 58 as core 11 on socket 0 00:04:33.791 EAL: Detected lcore 59 as core 12 on socket 0 00:04:33.791 EAL: Detected lcore 60 as core 13 on socket 0 00:04:33.791 EAL: Detected lcore 61 as core 16 on socket 0 00:04:33.791 EAL: Detected lcore 62 as core 17 on socket 0 00:04:33.791 EAL: Detected lcore 63 as core 18 on socket 0 00:04:33.791 EAL: Detected lcore 64 as core 19 on socket 0 00:04:33.791 EAL: Detected lcore 65 as core 20 on socket 0 00:04:33.792 EAL: Detected lcore 66 as core 21 on socket 0 00:04:33.792 EAL: Detected lcore 67 as core 25 on socket 0 00:04:33.792 EAL: Detected lcore 68 as core 26 on socket 0 00:04:33.792 EAL: Detected lcore 69 as core 27 on socket 0 00:04:33.792 EAL: Detected lcore 70 as core 28 on socket 0 00:04:33.792 EAL: Detected lcore 71 as core 29 on socket 0 00:04:33.792 EAL: Detected lcore 72 as core 0 on socket 1 00:04:33.792 EAL: Detected lcore 73 as core 1 on socket 1 00:04:33.792 EAL: Detected lcore 74 as core 2 on socket 1 00:04:33.792 EAL: Detected lcore 75 as core 3 on socket 1 00:04:33.792 EAL: Detected lcore 76 as core 4 on socket 1 00:04:33.792 EAL: Detected lcore 77 as core 5 on socket 1 00:04:33.792 EAL: Detected lcore 78 as core 6 on socket 1 00:04:33.792 EAL: Detected lcore 79 as core 9 on socket 1 00:04:33.792 EAL: Detected lcore 80 as core 10 on socket 1 00:04:33.792 EAL: Detected lcore 81 as core 11 on socket 1 00:04:33.792 EAL: Detected lcore 82 as core 12 on socket 1 00:04:33.792 EAL: Detected lcore 83 as core 13 on socket 1 00:04:33.792 EAL: Detected lcore 84 as core 16 on socket 1 00:04:33.792 EAL: Detected lcore 85 as core 17 on socket 1 00:04:33.792 EAL: Detected lcore 86 as core 18 on socket 1 00:04:33.792 EAL: Detected lcore 87 as core 19 on socket 1 00:04:33.792 EAL: Detected lcore 88 as core 20 on socket 1 00:04:33.792 EAL: Detected lcore 89 as core 21 on socket 1 00:04:33.792 EAL: Detected lcore 90 as core 24 on socket 1 00:04:33.792 EAL: Detected lcore 91 as core 25 on socket 1 00:04:33.792 EAL: Detected lcore 92 as core 26 on socket 1 00:04:33.792 EAL: Detected lcore 93 as core 27 on socket 1 00:04:33.792 EAL: Detected lcore 94 as core 28 on socket 1 00:04:33.792 EAL: Detected lcore 95 as core 29 on socket 1 00:04:33.792 EAL: Maximum logical cores by configuration: 128 00:04:33.792 EAL: Detected CPU lcores: 96 00:04:33.792 EAL: Detected NUMA nodes: 2 00:04:33.792 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:33.792 EAL: Detected shared linkage of DPDK 00:04:33.792 EAL: No shared files mode enabled, IPC will be disabled 00:04:33.792 EAL: Bus pci wants IOVA as 'DC' 00:04:33.792 EAL: Buses did not request a specific IOVA mode. 00:04:33.792 EAL: IOMMU is available, selecting IOVA as VA mode. 
00:04:33.792 EAL: Selected IOVA mode 'VA' 00:04:33.792 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.792 EAL: Probing VFIO support... 00:04:33.792 EAL: IOMMU type 1 (Type 1) is supported 00:04:33.792 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:33.792 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:33.792 EAL: VFIO support initialized 00:04:33.792 EAL: Ask a virtual area of 0x2e000 bytes 00:04:33.792 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:33.792 EAL: Setting up physically contiguous memory... 00:04:33.792 EAL: Setting maximum number of open files to 524288 00:04:33.792 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:33.792 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:33.792 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:33.792 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.792 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:33.792 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.792 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.792 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:33.792 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:33.792 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.792 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:33.792 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.792 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.792 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:33.792 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:33.792 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.792 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:33.792 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.792 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.792 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:33.792 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:33.792 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.792 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:33.792 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.792 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.792 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:33.792 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:33.792 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:33.792 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.792 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:33.792 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:33.792 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.792 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:33.792 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:33.792 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.792 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:33.792 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:33.792 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.792 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:33.792 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:33.792 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.792 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:33.792 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:33.792 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.792 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:33.792 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:33.792 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.792 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:33.792 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:33.792 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.792 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:33.792 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:33.792 EAL: Hugepages will be freed exactly as allocated. 00:04:33.792 EAL: No shared files mode enabled, IPC is disabled 00:04:33.792 EAL: No shared files mode enabled, IPC is disabled 00:04:33.792 EAL: TSC frequency is ~2300000 KHz 00:04:33.792 EAL: Main lcore 0 is ready (tid=7f3071378a00;cpuset=[0]) 00:04:33.792 EAL: Trying to obtain current memory policy. 00:04:33.792 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.792 EAL: Restoring previous memory policy: 0 00:04:33.792 EAL: request: mp_malloc_sync 00:04:33.792 EAL: No shared files mode enabled, IPC is disabled 00:04:33.792 EAL: Heap on socket 0 was expanded by 2MB 00:04:33.792 EAL: No shared files mode enabled, IPC is disabled 00:04:33.792 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:33.792 EAL: Mem event callback 'spdk:(nil)' registered 00:04:33.792 00:04:33.792 00:04:33.792 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.792 http://cunit.sourceforge.net/ 00:04:33.792 00:04:33.792 00:04:33.792 Suite: components_suite 00:04:33.792 Test: vtophys_malloc_test ...passed 00:04:33.792 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:33.792 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.792 EAL: Restoring previous memory policy: 4 00:04:33.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.792 EAL: request: mp_malloc_sync 00:04:33.792 EAL: No shared files mode enabled, IPC is disabled 00:04:33.792 EAL: Heap on socket 0 was expanded by 4MB 00:04:33.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.792 EAL: request: mp_malloc_sync 00:04:33.792 EAL: No shared files mode enabled, IPC is disabled 00:04:33.792 EAL: Heap on socket 0 was shrunk by 4MB 00:04:33.792 EAL: Trying to obtain current memory policy. 00:04:33.792 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.792 EAL: Restoring previous memory policy: 4 00:04:33.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.792 EAL: request: mp_malloc_sync 00:04:33.792 EAL: No shared files mode enabled, IPC is disabled 00:04:33.792 EAL: Heap on socket 0 was expanded by 6MB 00:04:33.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.792 EAL: request: mp_malloc_sync 00:04:33.792 EAL: No shared files mode enabled, IPC is disabled 00:04:33.792 EAL: Heap on socket 0 was shrunk by 6MB 00:04:33.792 EAL: Trying to obtain current memory policy. 
00:04:33.792 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.792 EAL: Restoring previous memory policy: 4 00:04:33.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.792 EAL: request: mp_malloc_sync 00:04:33.792 EAL: No shared files mode enabled, IPC is disabled 00:04:33.792 EAL: Heap on socket 0 was expanded by 10MB 00:04:33.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.792 EAL: request: mp_malloc_sync 00:04:33.792 EAL: No shared files mode enabled, IPC is disabled 00:04:33.792 EAL: Heap on socket 0 was shrunk by 10MB 00:04:33.792 EAL: Trying to obtain current memory policy. 00:04:33.792 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.792 EAL: Restoring previous memory policy: 4 00:04:33.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.792 EAL: request: mp_malloc_sync 00:04:33.792 EAL: No shared files mode enabled, IPC is disabled 00:04:33.792 EAL: Heap on socket 0 was expanded by 18MB 00:04:33.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.792 EAL: request: mp_malloc_sync 00:04:33.792 EAL: No shared files mode enabled, IPC is disabled 00:04:33.792 EAL: Heap on socket 0 was shrunk by 18MB 00:04:33.792 EAL: Trying to obtain current memory policy. 00:04:33.792 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.792 EAL: Restoring previous memory policy: 4 00:04:33.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.793 EAL: request: mp_malloc_sync 00:04:33.793 EAL: No shared files mode enabled, IPC is disabled 00:04:33.793 EAL: Heap on socket 0 was expanded by 34MB 00:04:33.793 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.793 EAL: request: mp_malloc_sync 00:04:33.793 EAL: No shared files mode enabled, IPC is disabled 00:04:33.793 EAL: Heap on socket 0 was shrunk by 34MB 00:04:33.793 EAL: Trying to obtain current memory policy. 00:04:33.793 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.793 EAL: Restoring previous memory policy: 4 00:04:33.793 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.793 EAL: request: mp_malloc_sync 00:04:33.793 EAL: No shared files mode enabled, IPC is disabled 00:04:33.793 EAL: Heap on socket 0 was expanded by 66MB 00:04:33.793 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.793 EAL: request: mp_malloc_sync 00:04:33.793 EAL: No shared files mode enabled, IPC is disabled 00:04:33.793 EAL: Heap on socket 0 was shrunk by 66MB 00:04:33.793 EAL: Trying to obtain current memory policy. 00:04:33.793 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.793 EAL: Restoring previous memory policy: 4 00:04:33.793 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.793 EAL: request: mp_malloc_sync 00:04:33.793 EAL: No shared files mode enabled, IPC is disabled 00:04:33.793 EAL: Heap on socket 0 was expanded by 130MB 00:04:34.066 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.066 EAL: request: mp_malloc_sync 00:04:34.066 EAL: No shared files mode enabled, IPC is disabled 00:04:34.066 EAL: Heap on socket 0 was shrunk by 130MB 00:04:34.066 EAL: Trying to obtain current memory policy. 
00:04:34.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.066 EAL: Restoring previous memory policy: 4 00:04:34.066 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.066 EAL: request: mp_malloc_sync 00:04:34.066 EAL: No shared files mode enabled, IPC is disabled 00:04:34.066 EAL: Heap on socket 0 was expanded by 258MB 00:04:34.066 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.066 EAL: request: mp_malloc_sync 00:04:34.066 EAL: No shared files mode enabled, IPC is disabled 00:04:34.066 EAL: Heap on socket 0 was shrunk by 258MB 00:04:34.066 EAL: Trying to obtain current memory policy. 00:04:34.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.066 EAL: Restoring previous memory policy: 4 00:04:34.066 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.066 EAL: request: mp_malloc_sync 00:04:34.066 EAL: No shared files mode enabled, IPC is disabled 00:04:34.066 EAL: Heap on socket 0 was expanded by 514MB 00:04:34.325 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.325 EAL: request: mp_malloc_sync 00:04:34.325 EAL: No shared files mode enabled, IPC is disabled 00:04:34.325 EAL: Heap on socket 0 was shrunk by 514MB 00:04:34.325 EAL: Trying to obtain current memory policy. 00:04:34.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.584 EAL: Restoring previous memory policy: 4 00:04:34.584 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.584 EAL: request: mp_malloc_sync 00:04:34.584 EAL: No shared files mode enabled, IPC is disabled 00:04:34.584 EAL: Heap on socket 0 was expanded by 1026MB 00:04:34.584 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.844 EAL: request: mp_malloc_sync 00:04:34.844 EAL: No shared files mode enabled, IPC is disabled 00:04:34.844 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:34.844 passed 00:04:34.844 00:04:34.844 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.844 suites 1 1 n/a 0 0 00:04:34.844 tests 2 2 2 0 0 00:04:34.844 asserts 497 497 497 0 n/a 00:04:34.844 00:04:34.844 Elapsed time = 0.963 seconds 00:04:34.844 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.844 EAL: request: mp_malloc_sync 00:04:34.844 EAL: No shared files mode enabled, IPC is disabled 00:04:34.844 EAL: Heap on socket 0 was shrunk by 2MB 00:04:34.844 EAL: No shared files mode enabled, IPC is disabled 00:04:34.844 EAL: No shared files mode enabled, IPC is disabled 00:04:34.844 EAL: No shared files mode enabled, IPC is disabled 00:04:34.844 00:04:34.844 real 0m1.075s 00:04:34.844 user 0m0.636s 00:04:34.844 sys 0m0.409s 00:04:34.844 13:46:02 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.844 13:46:02 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:34.844 ************************************ 00:04:34.844 END TEST env_vtophys 00:04:34.844 ************************************ 00:04:34.844 13:46:02 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:34.844 13:46:02 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.844 13:46:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.844 13:46:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.844 ************************************ 00:04:34.844 START TEST env_pci 00:04:34.844 ************************************ 00:04:34.844 13:46:02 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:34.844 00:04:34.844 00:04:34.844 CUnit - A unit testing 
framework for C - Version 2.1-3 00:04:34.844 http://cunit.sourceforge.net/ 00:04:34.844 00:04:34.844 00:04:34.844 Suite: pci 00:04:34.844 Test: pci_hook ...[2024-07-26 13:46:02.231550] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2779467 has claimed it 00:04:34.844 EAL: Cannot find device (10000:00:01.0) 00:04:34.844 EAL: Failed to attach device on primary process 00:04:34.844 passed 00:04:34.844 00:04:34.844 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.844 suites 1 1 n/a 0 0 00:04:34.844 tests 1 1 1 0 0 00:04:34.844 asserts 25 25 25 0 n/a 00:04:34.844 00:04:34.844 Elapsed time = 0.026 seconds 00:04:34.844 00:04:34.844 real 0m0.046s 00:04:34.844 user 0m0.016s 00:04:34.844 sys 0m0.029s 00:04:34.844 13:46:02 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.844 13:46:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:34.844 ************************************ 00:04:34.844 END TEST env_pci 00:04:34.844 ************************************ 00:04:35.104 13:46:02 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:35.104 13:46:02 env -- env/env.sh@15 -- # uname 00:04:35.104 13:46:02 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:35.104 13:46:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:35.104 13:46:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:35.104 13:46:02 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:35.104 13:46:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.104 13:46:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.104 ************************************ 00:04:35.104 START TEST env_dpdk_post_init 00:04:35.104 ************************************ 00:04:35.104 13:46:02 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:35.104 EAL: Detected CPU lcores: 96 00:04:35.104 EAL: Detected NUMA nodes: 2 00:04:35.104 EAL: Detected shared linkage of DPDK 00:04:35.104 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:35.104 EAL: Selected IOVA mode 'VA' 00:04:35.104 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.104 EAL: VFIO support initialized 00:04:35.104 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:35.104 EAL: Using IOMMU type 1 (Type 1) 00:04:35.104 EAL: Ignore mapping IO port bar(1) 00:04:35.104 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:35.104 EAL: Ignore mapping IO port bar(1) 00:04:35.104 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:35.104 EAL: Ignore mapping IO port bar(1) 00:04:35.104 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:35.104 EAL: Ignore mapping IO port bar(1) 00:04:35.104 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:35.104 EAL: Ignore mapping IO port bar(1) 00:04:35.104 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:35.104 EAL: Ignore mapping IO port bar(1) 00:04:35.104 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:35.104 EAL: Ignore mapping IO 
port bar(1) 00:04:35.104 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:35.363 EAL: Ignore mapping IO port bar(1) 00:04:35.363 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:35.933 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:35.933 EAL: Ignore mapping IO port bar(1) 00:04:35.933 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:35.933 EAL: Ignore mapping IO port bar(1) 00:04:35.933 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:35.933 EAL: Ignore mapping IO port bar(1) 00:04:35.933 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:35.933 EAL: Ignore mapping IO port bar(1) 00:04:35.933 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:35.933 EAL: Ignore mapping IO port bar(1) 00:04:35.933 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:35.933 EAL: Ignore mapping IO port bar(1) 00:04:35.933 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:35.933 EAL: Ignore mapping IO port bar(1) 00:04:35.933 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:36.192 EAL: Ignore mapping IO port bar(1) 00:04:36.192 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:39.482 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:39.482 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:39.482 Starting DPDK initialization... 00:04:39.482 Starting SPDK post initialization... 00:04:39.482 SPDK NVMe probe 00:04:39.482 Attaching to 0000:5e:00.0 00:04:39.482 Attached to 0000:5e:00.0 00:04:39.482 Cleaning up... 
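After a probe like the one above, the current driver binding of the test device can be confirmed straight from sysfs. A small sketch, assuming the BDF used throughout this job; the reported driver depends on whether setup.sh config or setup.sh reset ran last:

    # Sketch: show which kernel driver currently owns the test NVMe device.
    bdf=0000:5e:00.0                                                  # device used throughout this log
    driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
    echo "$bdf is bound to $driver"                                   # vfio-pci after 'config', nvme after 'reset'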
00:04:39.482 00:04:39.482 real 0m4.310s 00:04:39.482 user 0m3.251s 00:04:39.482 sys 0m0.129s 00:04:39.482 13:46:06 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.482 13:46:06 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:39.482 ************************************ 00:04:39.482 END TEST env_dpdk_post_init 00:04:39.482 ************************************ 00:04:39.482 13:46:06 env -- env/env.sh@26 -- # uname 00:04:39.482 13:46:06 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:39.482 13:46:06 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:39.482 13:46:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.482 13:46:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.482 13:46:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.482 ************************************ 00:04:39.482 START TEST env_mem_callbacks 00:04:39.482 ************************************ 00:04:39.482 13:46:06 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:39.482 EAL: Detected CPU lcores: 96 00:04:39.482 EAL: Detected NUMA nodes: 2 00:04:39.482 EAL: Detected shared linkage of DPDK 00:04:39.482 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:39.482 EAL: Selected IOVA mode 'VA' 00:04:39.482 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.482 EAL: VFIO support initialized 00:04:39.482 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:39.482 00:04:39.482 00:04:39.482 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.482 http://cunit.sourceforge.net/ 00:04:39.482 00:04:39.482 00:04:39.482 Suite: memory 00:04:39.482 Test: test ... 
00:04:39.482 register 0x200000200000 2097152 00:04:39.482 malloc 3145728 00:04:39.482 register 0x200000400000 4194304 00:04:39.482 buf 0x200000500000 len 3145728 PASSED 00:04:39.482 malloc 64 00:04:39.482 buf 0x2000004fff40 len 64 PASSED 00:04:39.482 malloc 4194304 00:04:39.482 register 0x200000800000 6291456 00:04:39.482 buf 0x200000a00000 len 4194304 PASSED 00:04:39.482 free 0x200000500000 3145728 00:04:39.482 free 0x2000004fff40 64 00:04:39.482 unregister 0x200000400000 4194304 PASSED 00:04:39.482 free 0x200000a00000 4194304 00:04:39.482 unregister 0x200000800000 6291456 PASSED 00:04:39.482 malloc 8388608 00:04:39.482 register 0x200000400000 10485760 00:04:39.482 buf 0x200000600000 len 8388608 PASSED 00:04:39.482 free 0x200000600000 8388608 00:04:39.482 unregister 0x200000400000 10485760 PASSED 00:04:39.482 passed 00:04:39.482 00:04:39.482 Run Summary: Type Total Ran Passed Failed Inactive 00:04:39.482 suites 1 1 n/a 0 0 00:04:39.482 tests 1 1 1 0 0 00:04:39.482 asserts 15 15 15 0 n/a 00:04:39.482 00:04:39.482 Elapsed time = 0.004 seconds 00:04:39.482 00:04:39.482 real 0m0.040s 00:04:39.482 user 0m0.010s 00:04:39.482 sys 0m0.029s 00:04:39.482 13:46:06 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.482 13:46:06 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:39.482 ************************************ 00:04:39.482 END TEST env_mem_callbacks 00:04:39.482 ************************************ 00:04:39.482 00:04:39.482 real 0m6.048s 00:04:39.482 user 0m4.222s 00:04:39.482 sys 0m0.895s 00:04:39.482 13:46:06 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.482 13:46:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.482 ************************************ 00:04:39.482 END TEST env 00:04:39.482 ************************************ 00:04:39.482 13:46:06 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:39.482 13:46:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.482 13:46:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.482 13:46:06 -- common/autotest_common.sh@10 -- # set +x 00:04:39.482 ************************************ 00:04:39.482 START TEST rpc 00:04:39.482 ************************************ 00:04:39.482 13:46:06 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:39.741 * Looking for test storage... 00:04:39.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:39.741 13:46:06 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2780292 00:04:39.741 13:46:06 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.741 13:46:06 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:39.741 13:46:06 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2780292 00:04:39.741 13:46:06 rpc -- common/autotest_common.sh@831 -- # '[' -z 2780292 ']' 00:04:39.741 13:46:06 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.741 13:46:06 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:39.741 13:46:06 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
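The waitforlisten step above polls until the freshly started spdk_tgt answers on its RPC socket. A rough hand-rolled equivalent, assuming the default /var/tmp/spdk.sock path from the message and that rpc.py's -s/-t options behave as in current SPDK:

    # Sketch: block until the spdk_tgt RPC socket accepts requests, then report readiness.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "spdk_tgt is listening on /var/tmp/spdk.sock"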
00:04:39.742 13:46:06 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:39.742 13:46:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.742 [2024-07-26 13:46:07.002999] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:04:39.742 [2024-07-26 13:46:07.003051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2780292 ] 00:04:39.742 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.742 [2024-07-26 13:46:07.057471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.742 [2024-07-26 13:46:07.137087] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:39.742 [2024-07-26 13:46:07.137122] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2780292' to capture a snapshot of events at runtime. 00:04:39.742 [2024-07-26 13:46:07.137130] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:39.742 [2024-07-26 13:46:07.137136] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:39.742 [2024-07-26 13:46:07.137141] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2780292 for offline analysis/debug. 00:04:39.742 [2024-07-26 13:46:07.137157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.679 13:46:07 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:40.679 13:46:07 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:40.679 13:46:07 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:40.679 13:46:07 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:40.679 13:46:07 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:40.679 13:46:07 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:40.679 13:46:07 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.679 13:46:07 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.679 13:46:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.679 ************************************ 00:04:40.679 START TEST rpc_integrity 00:04:40.679 ************************************ 00:04:40.679 13:46:07 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:40.679 13:46:07 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:40.679 13:46:07 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.679 13:46:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.679 13:46:07 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.679 13:46:07 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:40.679 13:46:07 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:40.679 13:46:07 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:40.679 13:46:07 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:40.679 13:46:07 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.679 13:46:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.679 13:46:07 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.679 13:46:07 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:40.679 13:46:07 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:40.679 13:46:07 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.679 13:46:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.679 13:46:07 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.679 13:46:07 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:40.679 { 00:04:40.679 "name": "Malloc0", 00:04:40.679 "aliases": [ 00:04:40.679 "e7e3ecfb-b3cd-43fe-a031-60201df4df04" 00:04:40.679 ], 00:04:40.679 "product_name": "Malloc disk", 00:04:40.679 "block_size": 512, 00:04:40.679 "num_blocks": 16384, 00:04:40.679 "uuid": "e7e3ecfb-b3cd-43fe-a031-60201df4df04", 00:04:40.679 "assigned_rate_limits": { 00:04:40.679 "rw_ios_per_sec": 0, 00:04:40.679 "rw_mbytes_per_sec": 0, 00:04:40.679 "r_mbytes_per_sec": 0, 00:04:40.679 "w_mbytes_per_sec": 0 00:04:40.679 }, 00:04:40.679 "claimed": false, 00:04:40.679 "zoned": false, 00:04:40.679 "supported_io_types": { 00:04:40.679 "read": true, 00:04:40.679 "write": true, 00:04:40.679 "unmap": true, 00:04:40.679 "flush": true, 00:04:40.679 "reset": true, 00:04:40.679 "nvme_admin": false, 00:04:40.679 "nvme_io": false, 00:04:40.679 "nvme_io_md": false, 00:04:40.679 "write_zeroes": true, 00:04:40.679 "zcopy": true, 00:04:40.679 "get_zone_info": false, 00:04:40.679 "zone_management": false, 00:04:40.679 "zone_append": false, 00:04:40.679 "compare": false, 00:04:40.679 "compare_and_write": false, 00:04:40.679 "abort": true, 00:04:40.679 "seek_hole": false, 00:04:40.679 "seek_data": false, 00:04:40.679 "copy": true, 00:04:40.679 "nvme_iov_md": false 00:04:40.679 }, 00:04:40.679 "memory_domains": [ 00:04:40.679 { 00:04:40.679 "dma_device_id": "system", 00:04:40.679 "dma_device_type": 1 00:04:40.679 }, 00:04:40.679 { 00:04:40.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.679 "dma_device_type": 2 00:04:40.679 } 00:04:40.679 ], 00:04:40.679 "driver_specific": {} 00:04:40.679 } 00:04:40.679 ]' 00:04:40.679 13:46:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:40.679 13:46:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:40.679 13:46:07 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:40.679 13:46:07 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.679 13:46:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.679 [2024-07-26 13:46:07.967797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:40.679 [2024-07-26 13:46:07.967823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:40.679 [2024-07-26 13:46:07.967835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14082d0 00:04:40.680 [2024-07-26 13:46:07.967841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:40.680 [2024-07-26 13:46:07.969100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
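  The rpc_integrity flow traced above reduces to a handful of RPCs; a condensed sketch with scripts/rpc.py follows, using the same method names and arguments that appear in the log and the same jq-length check rpc.sh uses to count the returned bdev list. It assumes the target started earlier is still listening on the default socket.

  # No bdevs yet
  ./scripts/rpc.py bdev_get_bdevs | jq length            # expect 0

  # 8 MiB malloc bdev with 512-byte blocks, then a passthru bdev on top of it
  ./scripts/rpc.py bdev_malloc_create 8 512              # prints the new name, Malloc0
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length            # expect 2

  # Tear down in reverse order and verify the list is empty again
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_get_bdevs | jq length            # expect 0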
00:04:40.680 [2024-07-26 13:46:07.969120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:40.680 Passthru0 00:04:40.680 13:46:07 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.680 13:46:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:40.680 13:46:07 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.680 13:46:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.680 13:46:07 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.680 13:46:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:40.680 { 00:04:40.680 "name": "Malloc0", 00:04:40.680 "aliases": [ 00:04:40.680 "e7e3ecfb-b3cd-43fe-a031-60201df4df04" 00:04:40.680 ], 00:04:40.680 "product_name": "Malloc disk", 00:04:40.680 "block_size": 512, 00:04:40.680 "num_blocks": 16384, 00:04:40.680 "uuid": "e7e3ecfb-b3cd-43fe-a031-60201df4df04", 00:04:40.680 "assigned_rate_limits": { 00:04:40.680 "rw_ios_per_sec": 0, 00:04:40.680 "rw_mbytes_per_sec": 0, 00:04:40.680 "r_mbytes_per_sec": 0, 00:04:40.680 "w_mbytes_per_sec": 0 00:04:40.680 }, 00:04:40.680 "claimed": true, 00:04:40.680 "claim_type": "exclusive_write", 00:04:40.680 "zoned": false, 00:04:40.680 "supported_io_types": { 00:04:40.680 "read": true, 00:04:40.680 "write": true, 00:04:40.680 "unmap": true, 00:04:40.680 "flush": true, 00:04:40.680 "reset": true, 00:04:40.680 "nvme_admin": false, 00:04:40.680 "nvme_io": false, 00:04:40.680 "nvme_io_md": false, 00:04:40.680 "write_zeroes": true, 00:04:40.680 "zcopy": true, 00:04:40.680 "get_zone_info": false, 00:04:40.680 "zone_management": false, 00:04:40.680 "zone_append": false, 00:04:40.680 "compare": false, 00:04:40.680 "compare_and_write": false, 00:04:40.680 "abort": true, 00:04:40.680 "seek_hole": false, 00:04:40.680 "seek_data": false, 00:04:40.680 "copy": true, 00:04:40.680 "nvme_iov_md": false 00:04:40.680 }, 00:04:40.680 "memory_domains": [ 00:04:40.680 { 00:04:40.680 "dma_device_id": "system", 00:04:40.680 "dma_device_type": 1 00:04:40.680 }, 00:04:40.680 { 00:04:40.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.680 "dma_device_type": 2 00:04:40.680 } 00:04:40.680 ], 00:04:40.680 "driver_specific": {} 00:04:40.680 }, 00:04:40.680 { 00:04:40.680 "name": "Passthru0", 00:04:40.680 "aliases": [ 00:04:40.680 "7ec538d7-d27f-56f1-a0bc-e4cee163e01b" 00:04:40.680 ], 00:04:40.680 "product_name": "passthru", 00:04:40.680 "block_size": 512, 00:04:40.680 "num_blocks": 16384, 00:04:40.680 "uuid": "7ec538d7-d27f-56f1-a0bc-e4cee163e01b", 00:04:40.680 "assigned_rate_limits": { 00:04:40.680 "rw_ios_per_sec": 0, 00:04:40.680 "rw_mbytes_per_sec": 0, 00:04:40.680 "r_mbytes_per_sec": 0, 00:04:40.680 "w_mbytes_per_sec": 0 00:04:40.680 }, 00:04:40.680 "claimed": false, 00:04:40.680 "zoned": false, 00:04:40.680 "supported_io_types": { 00:04:40.680 "read": true, 00:04:40.680 "write": true, 00:04:40.680 "unmap": true, 00:04:40.680 "flush": true, 00:04:40.680 "reset": true, 00:04:40.680 "nvme_admin": false, 00:04:40.680 "nvme_io": false, 00:04:40.680 "nvme_io_md": false, 00:04:40.680 "write_zeroes": true, 00:04:40.680 "zcopy": true, 00:04:40.680 "get_zone_info": false, 00:04:40.680 "zone_management": false, 00:04:40.680 "zone_append": false, 00:04:40.680 "compare": false, 00:04:40.680 "compare_and_write": false, 00:04:40.680 "abort": true, 00:04:40.680 "seek_hole": false, 00:04:40.680 "seek_data": false, 00:04:40.680 "copy": true, 00:04:40.680 "nvme_iov_md": false 00:04:40.680 
}, 00:04:40.680 "memory_domains": [ 00:04:40.680 { 00:04:40.680 "dma_device_id": "system", 00:04:40.680 "dma_device_type": 1 00:04:40.680 }, 00:04:40.680 { 00:04:40.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.680 "dma_device_type": 2 00:04:40.680 } 00:04:40.680 ], 00:04:40.680 "driver_specific": { 00:04:40.680 "passthru": { 00:04:40.680 "name": "Passthru0", 00:04:40.680 "base_bdev_name": "Malloc0" 00:04:40.680 } 00:04:40.680 } 00:04:40.680 } 00:04:40.680 ]' 00:04:40.680 13:46:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:40.680 13:46:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:40.680 13:46:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:40.680 13:46:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.680 13:46:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.680 13:46:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.680 13:46:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:40.680 13:46:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.680 13:46:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.680 13:46:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.680 13:46:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:40.680 13:46:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.680 13:46:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.680 13:46:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.680 13:46:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:40.680 13:46:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:40.680 13:46:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.680 00:04:40.680 real 0m0.268s 00:04:40.680 user 0m0.171s 00:04:40.680 sys 0m0.033s 00:04:40.680 13:46:08 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.680 13:46:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.680 ************************************ 00:04:40.680 END TEST rpc_integrity 00:04:40.680 ************************************ 00:04:40.939 13:46:08 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:40.939 13:46:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.939 13:46:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.939 13:46:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.939 ************************************ 00:04:40.939 START TEST rpc_plugins 00:04:40.939 ************************************ 00:04:40.939 13:46:08 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:40.939 13:46:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:40.939 13:46:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.939 13:46:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.939 13:46:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.939 13:46:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:40.939 13:46:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:40.939 13:46:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.939 13:46:08 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:04:40.939 13:46:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.939 13:46:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:40.939 { 00:04:40.939 "name": "Malloc1", 00:04:40.939 "aliases": [ 00:04:40.939 "9ca8e3d9-2b34-479a-bd25-a71f05fe68a3" 00:04:40.939 ], 00:04:40.939 "product_name": "Malloc disk", 00:04:40.939 "block_size": 4096, 00:04:40.939 "num_blocks": 256, 00:04:40.939 "uuid": "9ca8e3d9-2b34-479a-bd25-a71f05fe68a3", 00:04:40.939 "assigned_rate_limits": { 00:04:40.939 "rw_ios_per_sec": 0, 00:04:40.939 "rw_mbytes_per_sec": 0, 00:04:40.939 "r_mbytes_per_sec": 0, 00:04:40.939 "w_mbytes_per_sec": 0 00:04:40.939 }, 00:04:40.939 "claimed": false, 00:04:40.939 "zoned": false, 00:04:40.939 "supported_io_types": { 00:04:40.939 "read": true, 00:04:40.939 "write": true, 00:04:40.939 "unmap": true, 00:04:40.939 "flush": true, 00:04:40.939 "reset": true, 00:04:40.939 "nvme_admin": false, 00:04:40.939 "nvme_io": false, 00:04:40.939 "nvme_io_md": false, 00:04:40.939 "write_zeroes": true, 00:04:40.939 "zcopy": true, 00:04:40.939 "get_zone_info": false, 00:04:40.939 "zone_management": false, 00:04:40.939 "zone_append": false, 00:04:40.939 "compare": false, 00:04:40.939 "compare_and_write": false, 00:04:40.939 "abort": true, 00:04:40.939 "seek_hole": false, 00:04:40.939 "seek_data": false, 00:04:40.939 "copy": true, 00:04:40.939 "nvme_iov_md": false 00:04:40.939 }, 00:04:40.939 "memory_domains": [ 00:04:40.939 { 00:04:40.939 "dma_device_id": "system", 00:04:40.939 "dma_device_type": 1 00:04:40.939 }, 00:04:40.939 { 00:04:40.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.939 "dma_device_type": 2 00:04:40.939 } 00:04:40.939 ], 00:04:40.939 "driver_specific": {} 00:04:40.939 } 00:04:40.939 ]' 00:04:40.939 13:46:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:40.939 13:46:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:40.939 13:46:08 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:40.939 13:46:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.939 13:46:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.939 13:46:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.939 13:46:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:40.939 13:46:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.939 13:46:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.939 13:46:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.939 13:46:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:40.939 13:46:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:40.939 13:46:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:40.939 00:04:40.939 real 0m0.143s 00:04:40.939 user 0m0.088s 00:04:40.939 sys 0m0.019s 00:04:40.939 13:46:08 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.939 13:46:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.939 ************************************ 00:04:40.939 END TEST rpc_plugins 00:04:40.939 ************************************ 00:04:40.939 13:46:08 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:40.939 13:46:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.939 13:46:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.939 13:46:08 
rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.198 ************************************ 00:04:41.198 START TEST rpc_trace_cmd_test 00:04:41.198 ************************************ 00:04:41.198 13:46:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:41.198 13:46:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:41.198 13:46:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:41.198 13:46:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.198 13:46:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:41.198 13:46:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.198 13:46:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:41.198 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2780292", 00:04:41.198 "tpoint_group_mask": "0x8", 00:04:41.198 "iscsi_conn": { 00:04:41.198 "mask": "0x2", 00:04:41.198 "tpoint_mask": "0x0" 00:04:41.198 }, 00:04:41.198 "scsi": { 00:04:41.199 "mask": "0x4", 00:04:41.199 "tpoint_mask": "0x0" 00:04:41.199 }, 00:04:41.199 "bdev": { 00:04:41.199 "mask": "0x8", 00:04:41.199 "tpoint_mask": "0xffffffffffffffff" 00:04:41.199 }, 00:04:41.199 "nvmf_rdma": { 00:04:41.199 "mask": "0x10", 00:04:41.199 "tpoint_mask": "0x0" 00:04:41.199 }, 00:04:41.199 "nvmf_tcp": { 00:04:41.199 "mask": "0x20", 00:04:41.199 "tpoint_mask": "0x0" 00:04:41.199 }, 00:04:41.199 "ftl": { 00:04:41.199 "mask": "0x40", 00:04:41.199 "tpoint_mask": "0x0" 00:04:41.199 }, 00:04:41.199 "blobfs": { 00:04:41.199 "mask": "0x80", 00:04:41.199 "tpoint_mask": "0x0" 00:04:41.199 }, 00:04:41.199 "dsa": { 00:04:41.199 "mask": "0x200", 00:04:41.199 "tpoint_mask": "0x0" 00:04:41.199 }, 00:04:41.199 "thread": { 00:04:41.199 "mask": "0x400", 00:04:41.199 "tpoint_mask": "0x0" 00:04:41.199 }, 00:04:41.199 "nvme_pcie": { 00:04:41.199 "mask": "0x800", 00:04:41.199 "tpoint_mask": "0x0" 00:04:41.199 }, 00:04:41.199 "iaa": { 00:04:41.199 "mask": "0x1000", 00:04:41.199 "tpoint_mask": "0x0" 00:04:41.199 }, 00:04:41.199 "nvme_tcp": { 00:04:41.199 "mask": "0x2000", 00:04:41.199 "tpoint_mask": "0x0" 00:04:41.199 }, 00:04:41.199 "bdev_nvme": { 00:04:41.199 "mask": "0x4000", 00:04:41.199 "tpoint_mask": "0x0" 00:04:41.199 }, 00:04:41.199 "sock": { 00:04:41.199 "mask": "0x8000", 00:04:41.199 "tpoint_mask": "0x0" 00:04:41.199 } 00:04:41.199 }' 00:04:41.199 13:46:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:41.199 13:46:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:41.199 13:46:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:41.199 13:46:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:41.199 13:46:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:41.199 13:46:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:41.199 13:46:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:41.199 13:46:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:41.199 13:46:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:41.199 13:46:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:41.199 00:04:41.199 real 0m0.222s 00:04:41.199 user 0m0.183s 00:04:41.199 sys 0m0.029s 00:04:41.199 13:46:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.199 13:46:08 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:41.199 ************************************ 00:04:41.199 END TEST rpc_trace_cmd_test 00:04:41.199 ************************************ 00:04:41.458 13:46:08 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:41.458 13:46:08 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:41.458 13:46:08 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:41.458 13:46:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.458 13:46:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.458 13:46:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.458 ************************************ 00:04:41.458 START TEST rpc_daemon_integrity 00:04:41.458 ************************************ 00:04:41.458 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:41.458 13:46:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:41.458 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.458 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.458 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.458 13:46:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:41.458 13:46:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:41.458 13:46:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:41.458 13:46:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:41.459 { 00:04:41.459 "name": "Malloc2", 00:04:41.459 "aliases": [ 00:04:41.459 "33ff8627-6386-4cef-b889-147dc17945b1" 00:04:41.459 ], 00:04:41.459 "product_name": "Malloc disk", 00:04:41.459 "block_size": 512, 00:04:41.459 "num_blocks": 16384, 00:04:41.459 "uuid": "33ff8627-6386-4cef-b889-147dc17945b1", 00:04:41.459 "assigned_rate_limits": { 00:04:41.459 "rw_ios_per_sec": 0, 00:04:41.459 "rw_mbytes_per_sec": 0, 00:04:41.459 "r_mbytes_per_sec": 0, 00:04:41.459 "w_mbytes_per_sec": 0 00:04:41.459 }, 00:04:41.459 "claimed": false, 00:04:41.459 "zoned": false, 00:04:41.459 "supported_io_types": { 00:04:41.459 "read": true, 00:04:41.459 "write": true, 00:04:41.459 "unmap": true, 00:04:41.459 "flush": true, 00:04:41.459 "reset": true, 00:04:41.459 "nvme_admin": false, 00:04:41.459 "nvme_io": false, 00:04:41.459 "nvme_io_md": false, 00:04:41.459 "write_zeroes": true, 00:04:41.459 "zcopy": true, 00:04:41.459 "get_zone_info": false, 00:04:41.459 "zone_management": false, 00:04:41.459 "zone_append": false, 00:04:41.459 "compare": false, 00:04:41.459 "compare_and_write": false, 
00:04:41.459 "abort": true, 00:04:41.459 "seek_hole": false, 00:04:41.459 "seek_data": false, 00:04:41.459 "copy": true, 00:04:41.459 "nvme_iov_md": false 00:04:41.459 }, 00:04:41.459 "memory_domains": [ 00:04:41.459 { 00:04:41.459 "dma_device_id": "system", 00:04:41.459 "dma_device_type": 1 00:04:41.459 }, 00:04:41.459 { 00:04:41.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.459 "dma_device_type": 2 00:04:41.459 } 00:04:41.459 ], 00:04:41.459 "driver_specific": {} 00:04:41.459 } 00:04:41.459 ]' 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.459 [2024-07-26 13:46:08.806103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:41.459 [2024-07-26 13:46:08.806133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:41.459 [2024-07-26 13:46:08.806145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x159fac0 00:04:41.459 [2024-07-26 13:46:08.806152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:41.459 [2024-07-26 13:46:08.807110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:41.459 [2024-07-26 13:46:08.807130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:41.459 Passthru0 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:41.459 { 00:04:41.459 "name": "Malloc2", 00:04:41.459 "aliases": [ 00:04:41.459 "33ff8627-6386-4cef-b889-147dc17945b1" 00:04:41.459 ], 00:04:41.459 "product_name": "Malloc disk", 00:04:41.459 "block_size": 512, 00:04:41.459 "num_blocks": 16384, 00:04:41.459 "uuid": "33ff8627-6386-4cef-b889-147dc17945b1", 00:04:41.459 "assigned_rate_limits": { 00:04:41.459 "rw_ios_per_sec": 0, 00:04:41.459 "rw_mbytes_per_sec": 0, 00:04:41.459 "r_mbytes_per_sec": 0, 00:04:41.459 "w_mbytes_per_sec": 0 00:04:41.459 }, 00:04:41.459 "claimed": true, 00:04:41.459 "claim_type": "exclusive_write", 00:04:41.459 "zoned": false, 00:04:41.459 "supported_io_types": { 00:04:41.459 "read": true, 00:04:41.459 "write": true, 00:04:41.459 "unmap": true, 00:04:41.459 "flush": true, 00:04:41.459 "reset": true, 00:04:41.459 "nvme_admin": false, 00:04:41.459 "nvme_io": false, 00:04:41.459 "nvme_io_md": false, 00:04:41.459 "write_zeroes": true, 00:04:41.459 "zcopy": true, 00:04:41.459 "get_zone_info": false, 00:04:41.459 "zone_management": false, 00:04:41.459 "zone_append": false, 00:04:41.459 "compare": false, 00:04:41.459 "compare_and_write": false, 00:04:41.459 "abort": true, 00:04:41.459 "seek_hole": false, 00:04:41.459 "seek_data": false, 00:04:41.459 "copy": true, 
00:04:41.459 "nvme_iov_md": false 00:04:41.459 }, 00:04:41.459 "memory_domains": [ 00:04:41.459 { 00:04:41.459 "dma_device_id": "system", 00:04:41.459 "dma_device_type": 1 00:04:41.459 }, 00:04:41.459 { 00:04:41.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.459 "dma_device_type": 2 00:04:41.459 } 00:04:41.459 ], 00:04:41.459 "driver_specific": {} 00:04:41.459 }, 00:04:41.459 { 00:04:41.459 "name": "Passthru0", 00:04:41.459 "aliases": [ 00:04:41.459 "d85bd9c1-80de-534c-a964-daf38d7fcdbc" 00:04:41.459 ], 00:04:41.459 "product_name": "passthru", 00:04:41.459 "block_size": 512, 00:04:41.459 "num_blocks": 16384, 00:04:41.459 "uuid": "d85bd9c1-80de-534c-a964-daf38d7fcdbc", 00:04:41.459 "assigned_rate_limits": { 00:04:41.459 "rw_ios_per_sec": 0, 00:04:41.459 "rw_mbytes_per_sec": 0, 00:04:41.459 "r_mbytes_per_sec": 0, 00:04:41.459 "w_mbytes_per_sec": 0 00:04:41.459 }, 00:04:41.459 "claimed": false, 00:04:41.459 "zoned": false, 00:04:41.459 "supported_io_types": { 00:04:41.459 "read": true, 00:04:41.459 "write": true, 00:04:41.459 "unmap": true, 00:04:41.459 "flush": true, 00:04:41.459 "reset": true, 00:04:41.459 "nvme_admin": false, 00:04:41.459 "nvme_io": false, 00:04:41.459 "nvme_io_md": false, 00:04:41.459 "write_zeroes": true, 00:04:41.459 "zcopy": true, 00:04:41.459 "get_zone_info": false, 00:04:41.459 "zone_management": false, 00:04:41.459 "zone_append": false, 00:04:41.459 "compare": false, 00:04:41.459 "compare_and_write": false, 00:04:41.459 "abort": true, 00:04:41.459 "seek_hole": false, 00:04:41.459 "seek_data": false, 00:04:41.459 "copy": true, 00:04:41.459 "nvme_iov_md": false 00:04:41.459 }, 00:04:41.459 "memory_domains": [ 00:04:41.459 { 00:04:41.459 "dma_device_id": "system", 00:04:41.459 "dma_device_type": 1 00:04:41.459 }, 00:04:41.459 { 00:04:41.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.459 "dma_device_type": 2 00:04:41.459 } 00:04:41.459 ], 00:04:41.459 "driver_specific": { 00:04:41.459 "passthru": { 00:04:41.459 "name": "Passthru0", 00:04:41.459 "base_bdev_name": "Malloc2" 00:04:41.459 } 00:04:41.459 } 00:04:41.459 } 00:04:41.459 ]' 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.459 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.718 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.718 13:46:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:41.718 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.718 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.718 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.718 13:46:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:41.718 13:46:08 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:41.718 13:46:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:41.718 00:04:41.718 real 0m0.279s 00:04:41.718 user 0m0.177s 00:04:41.718 sys 0m0.038s 00:04:41.718 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.718 13:46:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.718 ************************************ 00:04:41.718 END TEST rpc_daemon_integrity 00:04:41.718 ************************************ 00:04:41.718 13:46:08 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:41.718 13:46:08 rpc -- rpc/rpc.sh@84 -- # killprocess 2780292 00:04:41.718 13:46:08 rpc -- common/autotest_common.sh@950 -- # '[' -z 2780292 ']' 00:04:41.718 13:46:08 rpc -- common/autotest_common.sh@954 -- # kill -0 2780292 00:04:41.718 13:46:08 rpc -- common/autotest_common.sh@955 -- # uname 00:04:41.718 13:46:08 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:41.718 13:46:08 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2780292 00:04:41.718 13:46:09 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:41.718 13:46:09 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:41.718 13:46:09 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2780292' 00:04:41.718 killing process with pid 2780292 00:04:41.718 13:46:09 rpc -- common/autotest_common.sh@969 -- # kill 2780292 00:04:41.718 13:46:09 rpc -- common/autotest_common.sh@974 -- # wait 2780292 00:04:41.978 00:04:41.978 real 0m2.475s 00:04:41.978 user 0m3.202s 00:04:41.978 sys 0m0.675s 00:04:41.978 13:46:09 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.978 13:46:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.978 ************************************ 00:04:41.978 END TEST rpc 00:04:41.978 ************************************ 00:04:41.978 13:46:09 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:41.978 13:46:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.978 13:46:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.978 13:46:09 -- common/autotest_common.sh@10 -- # set +x 00:04:41.978 ************************************ 00:04:41.978 START TEST skip_rpc 00:04:41.978 ************************************ 00:04:41.978 13:46:09 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:42.237 * Looking for test storage... 
00:04:42.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:42.237 13:46:09 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:42.237 13:46:09 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:42.237 13:46:09 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:42.237 13:46:09 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.237 13:46:09 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.237 13:46:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.237 ************************************ 00:04:42.237 START TEST skip_rpc 00:04:42.237 ************************************ 00:04:42.237 13:46:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:42.237 13:46:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2780922 00:04:42.237 13:46:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:42.237 13:46:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.237 13:46:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:42.237 [2024-07-26 13:46:09.572319] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:04:42.237 [2024-07-26 13:46:09.572357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2780922 ] 00:04:42.237 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.237 [2024-07-26 13:46:09.626103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.496 [2024-07-26 13:46:09.699834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2780922 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2780922 ']' 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2780922 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2780922 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2780922' 00:04:47.769 killing process with pid 2780922 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2780922 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2780922 00:04:47.769 00:04:47.769 real 0m5.364s 00:04:47.769 user 0m5.141s 00:04:47.769 sys 0m0.254s 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.769 13:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.769 ************************************ 00:04:47.769 END TEST skip_rpc 00:04:47.769 ************************************ 00:04:47.769 13:46:14 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:47.769 13:46:14 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.769 13:46:14 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.769 13:46:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.769 ************************************ 00:04:47.769 START TEST skip_rpc_with_json 00:04:47.769 ************************************ 00:04:47.769 13:46:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:47.769 13:46:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:47.769 13:46:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2781870 00:04:47.769 13:46:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.769 13:46:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.769 13:46:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2781870 00:04:47.769 13:46:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2781870 ']' 00:04:47.769 13:46:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.769 13:46:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:47.769 13:46:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
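  The skip_rpc case that just finished asserts the inverse of the earlier tests: with --no-rpc-server nothing listens on /var/tmp/spdk.sock, so any RPC must fail. A plain-bash sketch of that negative check (the harness wraps it in its NOT helper; the flags and the 5-second sleep are the ones shown in the log):

  # Start the target without an RPC server, on core 0
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5    # no socket to poll, so the test just sleeps instead of waitforlisten

  # spdk_get_version must fail because no RPC socket was created
  if ./scripts/rpc.py spdk_get_version > /dev/null 2>&1; then
      echo "unexpected: RPC server is up" >&2
      kill -9 "$tgt_pid"
      exit 1
  fi
  kill -9 "$tgt_pid"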
00:04:47.769 13:46:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:47.769 13:46:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.769 [2024-07-26 13:46:15.012828] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:04:47.769 [2024-07-26 13:46:15.012869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2781870 ] 00:04:47.769 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.769 [2024-07-26 13:46:15.067265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.769 [2024-07-26 13:46:15.147860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.707 13:46:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.707 13:46:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:48.707 13:46:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:48.707 13:46:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.707 13:46:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.707 [2024-07-26 13:46:15.816059] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:48.707 request: 00:04:48.707 { 00:04:48.707 "trtype": "tcp", 00:04:48.707 "method": "nvmf_get_transports", 00:04:48.707 "req_id": 1 00:04:48.707 } 00:04:48.707 Got JSON-RPC error response 00:04:48.707 response: 00:04:48.707 { 00:04:48.707 "code": -19, 00:04:48.707 "message": "No such device" 00:04:48.707 } 00:04:48.707 13:46:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:48.707 13:46:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:48.707 13:46:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.707 13:46:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.707 [2024-07-26 13:46:15.828161] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:48.707 13:46:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.707 13:46:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:48.707 13:46:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.707 13:46:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.707 13:46:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.707 13:46:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:48.707 { 00:04:48.707 "subsystems": [ 00:04:48.707 { 00:04:48.707 "subsystem": "vfio_user_target", 00:04:48.707 "config": null 00:04:48.707 }, 00:04:48.707 { 00:04:48.707 "subsystem": "keyring", 00:04:48.707 "config": [] 00:04:48.707 }, 00:04:48.707 { 00:04:48.707 "subsystem": "iobuf", 00:04:48.707 "config": [ 00:04:48.707 { 00:04:48.707 "method": "iobuf_set_options", 00:04:48.707 "params": { 00:04:48.707 "small_pool_count": 8192, 00:04:48.707 "large_pool_count": 1024, 00:04:48.707 "small_bufsize": 8192, 00:04:48.707 "large_bufsize": 
135168 00:04:48.707 } 00:04:48.707 } 00:04:48.707 ] 00:04:48.707 }, 00:04:48.707 { 00:04:48.707 "subsystem": "sock", 00:04:48.707 "config": [ 00:04:48.707 { 00:04:48.707 "method": "sock_set_default_impl", 00:04:48.707 "params": { 00:04:48.707 "impl_name": "posix" 00:04:48.707 } 00:04:48.707 }, 00:04:48.707 { 00:04:48.708 "method": "sock_impl_set_options", 00:04:48.708 "params": { 00:04:48.708 "impl_name": "ssl", 00:04:48.708 "recv_buf_size": 4096, 00:04:48.708 "send_buf_size": 4096, 00:04:48.708 "enable_recv_pipe": true, 00:04:48.708 "enable_quickack": false, 00:04:48.708 "enable_placement_id": 0, 00:04:48.708 "enable_zerocopy_send_server": true, 00:04:48.708 "enable_zerocopy_send_client": false, 00:04:48.708 "zerocopy_threshold": 0, 00:04:48.708 "tls_version": 0, 00:04:48.708 "enable_ktls": false 00:04:48.708 } 00:04:48.708 }, 00:04:48.708 { 00:04:48.708 "method": "sock_impl_set_options", 00:04:48.708 "params": { 00:04:48.708 "impl_name": "posix", 00:04:48.708 "recv_buf_size": 2097152, 00:04:48.708 "send_buf_size": 2097152, 00:04:48.708 "enable_recv_pipe": true, 00:04:48.708 "enable_quickack": false, 00:04:48.708 "enable_placement_id": 0, 00:04:48.708 "enable_zerocopy_send_server": true, 00:04:48.708 "enable_zerocopy_send_client": false, 00:04:48.708 "zerocopy_threshold": 0, 00:04:48.708 "tls_version": 0, 00:04:48.708 "enable_ktls": false 00:04:48.708 } 00:04:48.708 } 00:04:48.708 ] 00:04:48.708 }, 00:04:48.708 { 00:04:48.708 "subsystem": "vmd", 00:04:48.708 "config": [] 00:04:48.708 }, 00:04:48.708 { 00:04:48.708 "subsystem": "accel", 00:04:48.708 "config": [ 00:04:48.708 { 00:04:48.708 "method": "accel_set_options", 00:04:48.708 "params": { 00:04:48.708 "small_cache_size": 128, 00:04:48.708 "large_cache_size": 16, 00:04:48.708 "task_count": 2048, 00:04:48.708 "sequence_count": 2048, 00:04:48.708 "buf_count": 2048 00:04:48.708 } 00:04:48.708 } 00:04:48.708 ] 00:04:48.708 }, 00:04:48.708 { 00:04:48.708 "subsystem": "bdev", 00:04:48.708 "config": [ 00:04:48.708 { 00:04:48.708 "method": "bdev_set_options", 00:04:48.708 "params": { 00:04:48.708 "bdev_io_pool_size": 65535, 00:04:48.708 "bdev_io_cache_size": 256, 00:04:48.708 "bdev_auto_examine": true, 00:04:48.708 "iobuf_small_cache_size": 128, 00:04:48.708 "iobuf_large_cache_size": 16 00:04:48.708 } 00:04:48.708 }, 00:04:48.708 { 00:04:48.708 "method": "bdev_raid_set_options", 00:04:48.708 "params": { 00:04:48.708 "process_window_size_kb": 1024, 00:04:48.708 "process_max_bandwidth_mb_sec": 0 00:04:48.708 } 00:04:48.708 }, 00:04:48.708 { 00:04:48.708 "method": "bdev_iscsi_set_options", 00:04:48.708 "params": { 00:04:48.708 "timeout_sec": 30 00:04:48.708 } 00:04:48.708 }, 00:04:48.708 { 00:04:48.708 "method": "bdev_nvme_set_options", 00:04:48.708 "params": { 00:04:48.708 "action_on_timeout": "none", 00:04:48.708 "timeout_us": 0, 00:04:48.708 "timeout_admin_us": 0, 00:04:48.708 "keep_alive_timeout_ms": 10000, 00:04:48.708 "arbitration_burst": 0, 00:04:48.708 "low_priority_weight": 0, 00:04:48.708 "medium_priority_weight": 0, 00:04:48.708 "high_priority_weight": 0, 00:04:48.708 "nvme_adminq_poll_period_us": 10000, 00:04:48.708 "nvme_ioq_poll_period_us": 0, 00:04:48.708 "io_queue_requests": 0, 00:04:48.708 "delay_cmd_submit": true, 00:04:48.708 "transport_retry_count": 4, 00:04:48.708 "bdev_retry_count": 3, 00:04:48.708 "transport_ack_timeout": 0, 00:04:48.708 "ctrlr_loss_timeout_sec": 0, 00:04:48.708 "reconnect_delay_sec": 0, 00:04:48.708 "fast_io_fail_timeout_sec": 0, 00:04:48.708 "disable_auto_failback": false, 00:04:48.708 "generate_uuids": 
false, 00:04:48.708 "transport_tos": 0, 00:04:48.708 "nvme_error_stat": false, 00:04:48.708 "rdma_srq_size": 0, 00:04:48.708 "io_path_stat": false, 00:04:48.708 "allow_accel_sequence": false, 00:04:48.708 "rdma_max_cq_size": 0, 00:04:48.708 "rdma_cm_event_timeout_ms": 0, 00:04:48.708 "dhchap_digests": [ 00:04:48.708 "sha256", 00:04:48.708 "sha384", 00:04:48.708 "sha512" 00:04:48.708 ], 00:04:48.708 "dhchap_dhgroups": [ 00:04:48.708 "null", 00:04:48.708 "ffdhe2048", 00:04:48.708 "ffdhe3072", 00:04:48.708 "ffdhe4096", 00:04:48.708 "ffdhe6144", 00:04:48.708 "ffdhe8192" 00:04:48.708 ] 00:04:48.708 } 00:04:48.708 }, 00:04:48.708 { 00:04:48.708 "method": "bdev_nvme_set_hotplug", 00:04:48.708 "params": { 00:04:48.708 "period_us": 100000, 00:04:48.708 "enable": false 00:04:48.708 } 00:04:48.708 }, 00:04:48.708 { 00:04:48.708 "method": "bdev_wait_for_examine" 00:04:48.708 } 00:04:48.708 ] 00:04:48.708 }, 00:04:48.708 { 00:04:48.708 "subsystem": "scsi", 00:04:48.708 "config": null 00:04:48.708 }, 00:04:48.708 { 00:04:48.708 "subsystem": "scheduler", 00:04:48.708 "config": [ 00:04:48.708 { 00:04:48.708 "method": "framework_set_scheduler", 00:04:48.708 "params": { 00:04:48.708 "name": "static" 00:04:48.708 } 00:04:48.708 } 00:04:48.708 ] 00:04:48.708 }, 00:04:48.708 { 00:04:48.708 "subsystem": "vhost_scsi", 00:04:48.708 "config": [] 00:04:48.708 }, 00:04:48.708 { 00:04:48.708 "subsystem": "vhost_blk", 00:04:48.708 "config": [] 00:04:48.708 }, 00:04:48.708 { 00:04:48.708 "subsystem": "ublk", 00:04:48.708 "config": [] 00:04:48.708 }, 00:04:48.708 { 00:04:48.708 "subsystem": "nbd", 00:04:48.708 "config": [] 00:04:48.708 }, 00:04:48.708 { 00:04:48.708 "subsystem": "nvmf", 00:04:48.708 "config": [ 00:04:48.708 { 00:04:48.708 "method": "nvmf_set_config", 00:04:48.708 "params": { 00:04:48.708 "discovery_filter": "match_any", 00:04:48.708 "admin_cmd_passthru": { 00:04:48.708 "identify_ctrlr": false 00:04:48.708 } 00:04:48.708 } 00:04:48.708 }, 00:04:48.708 { 00:04:48.708 "method": "nvmf_set_max_subsystems", 00:04:48.708 "params": { 00:04:48.708 "max_subsystems": 1024 00:04:48.708 } 00:04:48.708 }, 00:04:48.708 { 00:04:48.708 "method": "nvmf_set_crdt", 00:04:48.708 "params": { 00:04:48.708 "crdt1": 0, 00:04:48.708 "crdt2": 0, 00:04:48.708 "crdt3": 0 00:04:48.708 } 00:04:48.708 }, 00:04:48.708 { 00:04:48.708 "method": "nvmf_create_transport", 00:04:48.708 "params": { 00:04:48.708 "trtype": "TCP", 00:04:48.708 "max_queue_depth": 128, 00:04:48.708 "max_io_qpairs_per_ctrlr": 127, 00:04:48.708 "in_capsule_data_size": 4096, 00:04:48.708 "max_io_size": 131072, 00:04:48.708 "io_unit_size": 131072, 00:04:48.708 "max_aq_depth": 128, 00:04:48.708 "num_shared_buffers": 511, 00:04:48.708 "buf_cache_size": 4294967295, 00:04:48.708 "dif_insert_or_strip": false, 00:04:48.708 "zcopy": false, 00:04:48.708 "c2h_success": true, 00:04:48.708 "sock_priority": 0, 00:04:48.708 "abort_timeout_sec": 1, 00:04:48.708 "ack_timeout": 0, 00:04:48.708 "data_wr_pool_size": 0 00:04:48.709 } 00:04:48.709 } 00:04:48.709 ] 00:04:48.709 }, 00:04:48.709 { 00:04:48.709 "subsystem": "iscsi", 00:04:48.709 "config": [ 00:04:48.709 { 00:04:48.709 "method": "iscsi_set_options", 00:04:48.709 "params": { 00:04:48.709 "node_base": "iqn.2016-06.io.spdk", 00:04:48.709 "max_sessions": 128, 00:04:48.709 "max_connections_per_session": 2, 00:04:48.709 "max_queue_depth": 64, 00:04:48.709 "default_time2wait": 2, 00:04:48.709 "default_time2retain": 20, 00:04:48.709 "first_burst_length": 8192, 00:04:48.709 "immediate_data": true, 00:04:48.709 "allow_duplicated_isid": 
false, 00:04:48.709 "error_recovery_level": 0, 00:04:48.709 "nop_timeout": 60, 00:04:48.709 "nop_in_interval": 30, 00:04:48.709 "disable_chap": false, 00:04:48.709 "require_chap": false, 00:04:48.709 "mutual_chap": false, 00:04:48.709 "chap_group": 0, 00:04:48.709 "max_large_datain_per_connection": 64, 00:04:48.709 "max_r2t_per_connection": 4, 00:04:48.709 "pdu_pool_size": 36864, 00:04:48.709 "immediate_data_pool_size": 16384, 00:04:48.709 "data_out_pool_size": 2048 00:04:48.709 } 00:04:48.709 } 00:04:48.709 ] 00:04:48.709 } 00:04:48.709 ] 00:04:48.709 } 00:04:48.709 13:46:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:48.709 13:46:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2781870 00:04:48.709 13:46:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2781870 ']' 00:04:48.709 13:46:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2781870 00:04:48.709 13:46:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:48.709 13:46:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:48.709 13:46:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2781870 00:04:48.709 13:46:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:48.709 13:46:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:48.709 13:46:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2781870' 00:04:48.709 killing process with pid 2781870 00:04:48.709 13:46:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2781870 00:04:48.709 13:46:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2781870 00:04:48.969 13:46:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2782110 00:04:48.969 13:46:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:48.969 13:46:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:54.242 13:46:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2782110 00:04:54.242 13:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2782110 ']' 00:04:54.242 13:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2782110 00:04:54.242 13:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:54.242 13:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:54.242 13:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2782110 00:04:54.242 13:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:54.242 13:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:54.242 13:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2782110' 00:04:54.242 killing process with pid 2782110 00:04:54.242 13:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2782110 00:04:54.242 13:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 
2782110 00:04:54.501 13:46:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:54.501 13:46:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:54.501 00:04:54.501 real 0m6.747s 00:04:54.501 user 0m6.592s 00:04:54.501 sys 0m0.584s 00:04:54.501 13:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.501 13:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.501 ************************************ 00:04:54.501 END TEST skip_rpc_with_json 00:04:54.501 ************************************ 00:04:54.501 13:46:21 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:54.501 13:46:21 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.502 13:46:21 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.502 13:46:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.502 ************************************ 00:04:54.502 START TEST skip_rpc_with_delay 00:04:54.502 ************************************ 00:04:54.502 13:46:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:54.502 13:46:21 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.502 13:46:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:54.502 13:46:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.502 13:46:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.502 13:46:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.502 13:46:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.502 13:46:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.502 13:46:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.502 13:46:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.502 13:46:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.502 13:46:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:54.502 13:46:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.502 [2024-07-26 13:46:21.831117] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
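  The skip_rpc_with_json pass that ends above is essentially a round trip: configure a TCP transport over RPC, dump the running configuration with save_config, restart the target non-interactively from that JSON, and grep its log for the transport init message. A condensed sketch using the same RPCs and flags that appear in the log (paths shortened; stopping the first target is elided):

  CONFIG=./config.json
  LOG=./log.txt

  # 1) Configure a running target and capture its configuration
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > "$CONFIG"
  # ... stop the RPC-enabled target ...

  # 2) Restart without the RPC server, loading the saved JSON, and confirm the transport came back
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$CONFIG" > "$LOG" 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' "$LOG"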
00:04:54.502 [2024-07-26 13:46:21.831180] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:54.502 13:46:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:54.502 13:46:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:54.502 13:46:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:54.502 13:46:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:54.502 00:04:54.502 real 0m0.067s 00:04:54.502 user 0m0.043s 00:04:54.502 sys 0m0.024s 00:04:54.502 13:46:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.502 13:46:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:54.502 ************************************ 00:04:54.502 END TEST skip_rpc_with_delay 00:04:54.502 ************************************ 00:04:54.502 13:46:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:54.502 13:46:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:54.502 13:46:21 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:54.502 13:46:21 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.502 13:46:21 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.502 13:46:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.502 ************************************ 00:04:54.502 START TEST exit_on_failed_rpc_init 00:04:54.502 ************************************ 00:04:54.502 13:46:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:54.502 13:46:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2783080 00:04:54.502 13:46:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2783080 00:04:54.502 13:46:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.502 13:46:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2783080 ']' 00:04:54.502 13:46:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.502 13:46:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:54.502 13:46:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.502 13:46:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:54.502 13:46:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:54.761 [2024-07-26 13:46:21.967070] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:04:54.761 [2024-07-26 13:46:21.967112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2783080 ] 00:04:54.761 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.761 [2024-07-26 13:46:22.020739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.761 [2024-07-26 13:46:22.100422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.334 13:46:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:55.334 13:46:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:55.635 13:46:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.635 13:46:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.635 13:46:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:55.635 13:46:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.635 13:46:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.635 13:46:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.635 13:46:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.635 13:46:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.635 13:46:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.635 13:46:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.635 13:46:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.635 13:46:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:55.635 13:46:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.635 [2024-07-26 13:46:22.823393] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
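The second spdk_tgt launched just above (core mask 0x2) is expected to fail: the first instance, pid 2783080, still owns /var/tmp/spdk.sock, so RPC initialization reports the socket in use and the app stops with a non-zero status, which the NOT wrapper turns back into success. A rough equivalent of that sequence, with a plain sleep standing in for the real waitforlisten helper:

    spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &            # first instance claims /var/tmp/spdk.sock
    first=$!
    sleep 2                         # crude stand-in for waitforlisten
    if "$spdk_tgt" -m 0x2; then     # must not come up while the socket is busy
        echo "unexpected: second target initialized RPC on a busy socket" >&2
    fi
    kill -SIGINT "$first"           # shut the surviving instance down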
00:04:55.635 [2024-07-26 13:46:22.823438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2783312 ] 00:04:55.635 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.635 [2024-07-26 13:46:22.876099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.635 [2024-07-26 13:46:22.950296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.635 [2024-07-26 13:46:22.950362] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:55.635 [2024-07-26 13:46:22.950371] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:55.635 [2024-07-26 13:46:22.950377] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:55.635 13:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:55.635 13:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:55.635 13:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:55.635 13:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:55.635 13:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:55.635 13:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:55.635 13:46:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:55.635 13:46:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2783080 00:04:55.635 13:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2783080 ']' 00:04:55.635 13:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2783080 00:04:55.635 13:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:55.635 13:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:55.635 13:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2783080 00:04:55.904 13:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:55.904 13:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:55.904 13:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2783080' 00:04:55.904 killing process with pid 2783080 00:04:55.904 13:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2783080 00:04:55.904 13:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2783080 00:04:56.163 00:04:56.163 real 0m1.461s 00:04:56.163 user 0m1.689s 00:04:56.163 sys 0m0.391s 00:04:56.163 13:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.163 13:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.163 ************************************ 00:04:56.163 END TEST exit_on_failed_rpc_init 00:04:56.163 ************************************ 00:04:56.163 13:46:23 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:56.163 00:04:56.163 real 0m14.009s 00:04:56.163 user 0m13.611s 00:04:56.163 sys 0m1.501s 00:04:56.163 13:46:23 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.163 13:46:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.163 ************************************ 00:04:56.163 END TEST skip_rpc 00:04:56.163 ************************************ 00:04:56.163 13:46:23 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:56.163 13:46:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.163 13:46:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.163 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:56.163 ************************************ 00:04:56.163 START TEST rpc_client 00:04:56.163 ************************************ 00:04:56.163 13:46:23 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:56.163 * Looking for test storage... 00:04:56.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:56.163 13:46:23 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:56.163 OK 00:04:56.163 13:46:23 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:56.163 00:04:56.163 real 0m0.101s 00:04:56.163 user 0m0.055s 00:04:56.163 sys 0m0.053s 00:04:56.163 13:46:23 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.163 13:46:23 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:56.163 ************************************ 00:04:56.163 END TEST rpc_client 00:04:56.163 ************************************ 00:04:56.422 13:46:23 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:56.422 13:46:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.422 13:46:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.422 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:56.422 ************************************ 00:04:56.422 START TEST json_config 00:04:56.422 ************************************ 00:04:56.422 13:46:23 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:56.422 13:46:23 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:56.422 13:46:23 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:56.422 13:46:23 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.422 13:46:23 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.422 13:46:23 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.422 13:46:23 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.422 13:46:23 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.422 13:46:23 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.422 13:46:23 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.422 13:46:23 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.422 13:46:23 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
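The json_config suite sources test/nvmf/common.sh for its defaults, and the lines that follow show the host identity being generated with nvme gen-hostnqn, the host ID being the UUID part of that NQN. Condensed, with the NVME_HOSTID extraction written as an assumption (the log only shows the resulting value, not the expansion used):

    NVMF_PORT=4420
    NVMF_TCP_IP_ADDRESS=127.0.0.1
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed: strip everything up to "uuid:"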
00:04:56.422 13:46:23 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.423 13:46:23 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:56.423 13:46:23 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:56.423 13:46:23 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.423 13:46:23 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.423 13:46:23 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:56.423 13:46:23 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:56.423 13:46:23 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:56.423 13:46:23 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.423 13:46:23 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.423 13:46:23 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.423 13:46:23 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.423 13:46:23 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.423 13:46:23 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.423 13:46:23 json_config -- paths/export.sh@5 -- # export PATH 00:04:56.423 13:46:23 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.423 13:46:23 json_config -- nvmf/common.sh@47 -- # : 0 00:04:56.423 13:46:23 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:56.423 13:46:23 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:56.423 13:46:23 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:56.423 13:46:23 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.423 13:46:23 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.423 13:46:23 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:56.423 13:46:23 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:56.423 13:46:23 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:56.423 13:46:23 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:56.423 13:46:23 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:56.423 13:46:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:56.423 13:46:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:56.423 13:46:23 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:56.423 13:46:23 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:56.423 13:46:23 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:56.423 13:46:23 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:56.423 13:46:23 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:56.423 13:46:23 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:56.423 13:46:23 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:56.423 13:46:23 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:56.423 13:46:23 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:56.423 13:46:23 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:56.423 13:46:23 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:56.423 13:46:23 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:56.423 INFO: JSON configuration test init 00:04:56.423 13:46:23 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:56.423 13:46:23 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:56.423 13:46:23 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:56.423 13:46:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.423 13:46:23 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:56.423 13:46:23 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:56.423 13:46:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.423 13:46:23 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:56.423 13:46:23 json_config -- json_config/common.sh@9 -- # local app=target 00:04:56.423 13:46:23 json_config -- json_config/common.sh@10 -- # shift 00:04:56.423 13:46:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:56.423 13:46:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:56.423 13:46:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:56.423 13:46:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 
]] 00:04:56.423 13:46:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.423 13:46:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2783480 00:04:56.423 13:46:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:56.423 Waiting for target to run... 00:04:56.423 13:46:23 json_config -- json_config/common.sh@25 -- # waitforlisten 2783480 /var/tmp/spdk_tgt.sock 00:04:56.423 13:46:23 json_config -- common/autotest_common.sh@831 -- # '[' -z 2783480 ']' 00:04:56.423 13:46:23 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.423 13:46:23 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:56.423 13:46:23 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:56.423 13:46:23 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:56.423 13:46:23 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:56.423 13:46:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.423 [2024-07-26 13:46:23.806433] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:04:56.423 [2024-07-26 13:46:23.806481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2783480 ] 00:04:56.423 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.991 [2024-07-26 13:46:24.237981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.991 [2024-07-26 13:46:24.329087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.250 13:46:24 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:57.250 13:46:24 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:57.250 13:46:24 json_config -- json_config/common.sh@26 -- # echo '' 00:04:57.250 00:04:57.250 13:46:24 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:57.250 13:46:24 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:57.250 13:46:24 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:57.250 13:46:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.250 13:46:24 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:57.250 13:46:24 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:57.250 13:46:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:57.250 13:46:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.250 13:46:24 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:57.250 13:46:24 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:57.250 13:46:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@280 -- # 
tgt_check_notification_types 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:00.536 13:46:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:00.536 13:46:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:00.536 13:46:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@51 -- # sort 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:00.536 13:46:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:00.536 13:46:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:00.536 13:46:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:00.536 13:46:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:00.536 13:46:27 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:00.536 13:46:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:00.795 MallocForNvmf0 00:05:00.795 
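Everything the create_nvmf_subsystem_config step does goes through rpc.py against /var/tmp/spdk_tgt.sock; gathered into one place, the calls issued just above and in the lines that follow are:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420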
13:46:28 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:00.795 13:46:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:01.053 MallocForNvmf1 00:05:01.053 13:46:28 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:01.053 13:46:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:01.053 [2024-07-26 13:46:28.444870] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:01.053 13:46:28 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:01.053 13:46:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:01.312 13:46:28 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:01.312 13:46:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:01.570 13:46:28 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:01.570 13:46:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:01.570 13:46:28 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:01.570 13:46:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:01.828 [2024-07-26 13:46:29.114994] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:01.828 13:46:29 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:01.828 13:46:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:01.828 13:46:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.828 13:46:29 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:01.828 13:46:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:01.828 13:46:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.828 13:46:29 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:01.828 13:46:29 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:01.828 13:46:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:02.087 MallocBdevForConfigChangeCheck 00:05:02.087 13:46:29 json_config -- 
json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:02.087 13:46:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.087 13:46:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.087 13:46:29 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:02.087 13:46:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:02.346 13:46:29 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:02.346 INFO: shutting down applications... 00:05:02.346 13:46:29 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:02.346 13:46:29 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:02.346 13:46:29 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:02.346 13:46:29 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:04.247 Calling clear_iscsi_subsystem 00:05:04.248 Calling clear_nvmf_subsystem 00:05:04.248 Calling clear_nbd_subsystem 00:05:04.248 Calling clear_ublk_subsystem 00:05:04.248 Calling clear_vhost_blk_subsystem 00:05:04.248 Calling clear_vhost_scsi_subsystem 00:05:04.248 Calling clear_bdev_subsystem 00:05:04.248 13:46:31 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:04.248 13:46:31 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:04.248 13:46:31 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:04.248 13:46:31 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:04.248 13:46:31 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:04.248 13:46:31 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:04.248 13:46:31 json_config -- json_config/json_config.sh@349 -- # break 00:05:04.248 13:46:31 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:04.248 13:46:31 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:04.248 13:46:31 json_config -- json_config/common.sh@31 -- # local app=target 00:05:04.248 13:46:31 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:04.248 13:46:31 json_config -- json_config/common.sh@35 -- # [[ -n 2783480 ]] 00:05:04.248 13:46:31 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2783480 00:05:04.248 13:46:31 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:04.248 13:46:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.248 13:46:31 json_config -- json_config/common.sh@41 -- # kill -0 2783480 00:05:04.248 13:46:31 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.817 13:46:32 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.817 13:46:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.817 13:46:32 json_config -- json_config/common.sh@41 -- # kill -0 2783480 00:05:04.817 13:46:32 json_config -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:05:04.817 13:46:32 json_config -- json_config/common.sh@43 -- # break 00:05:04.817 13:46:32 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:04.817 13:46:32 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:04.817 SPDK target shutdown done 00:05:04.817 13:46:32 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:04.817 INFO: relaunching applications... 00:05:04.817 13:46:32 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.817 13:46:32 json_config -- json_config/common.sh@9 -- # local app=target 00:05:04.817 13:46:32 json_config -- json_config/common.sh@10 -- # shift 00:05:04.817 13:46:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:04.817 13:46:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:04.817 13:46:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:04.817 13:46:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.817 13:46:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.817 13:46:32 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.817 13:46:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2785077 00:05:04.817 13:46:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:04.817 Waiting for target to run... 00:05:04.817 13:46:32 json_config -- json_config/common.sh@25 -- # waitforlisten 2785077 /var/tmp/spdk_tgt.sock 00:05:04.817 13:46:32 json_config -- common/autotest_common.sh@831 -- # '[' -z 2785077 ']' 00:05:04.817 13:46:32 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:04.817 13:46:32 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:04.817 13:46:32 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:04.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:04.817 13:46:32 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:04.817 13:46:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.817 [2024-07-26 13:46:32.159164] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:05:04.817 [2024-07-26 13:46:32.159226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2785077 ] 00:05:04.817 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.386 [2024-07-26 13:46:32.599327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.386 [2024-07-26 13:46:32.684871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.689 [2024-07-26 13:46:35.694779] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:08.689 [2024-07-26 13:46:35.727103] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:08.948 13:46:36 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.948 13:46:36 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:08.948 13:46:36 json_config -- json_config/common.sh@26 -- # echo '' 00:05:08.948 00:05:08.948 13:46:36 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:08.948 13:46:36 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:08.948 INFO: Checking if target configuration is the same... 00:05:08.948 13:46:36 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:08.948 13:46:36 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:08.948 13:46:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:08.948 + '[' 2 -ne 2 ']' 00:05:08.948 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:08.948 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:08.948 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:08.948 +++ basename /dev/fd/62 00:05:08.948 ++ mktemp /tmp/62.XXX 00:05:08.948 + tmp_file_1=/tmp/62.2uq 00:05:08.948 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:08.948 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:08.948 + tmp_file_2=/tmp/spdk_tgt_config.json.vRM 00:05:08.948 + ret=0 00:05:08.948 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:09.206 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:09.465 + diff -u /tmp/62.2uq /tmp/spdk_tgt_config.json.vRM 00:05:09.465 + echo 'INFO: JSON config files are the same' 00:05:09.465 INFO: JSON config files are the same 00:05:09.465 + rm /tmp/62.2uq /tmp/spdk_tgt_config.json.vRM 00:05:09.465 + exit 0 00:05:09.465 13:46:36 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:09.465 13:46:36 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:09.465 INFO: changing configuration and checking if this can be detected... 
00:05:09.465 13:46:36 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:09.465 13:46:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:09.465 13:46:36 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:09.465 13:46:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.465 13:46:36 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.465 + '[' 2 -ne 2 ']' 00:05:09.465 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:09.465 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:09.465 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:09.465 +++ basename /dev/fd/62 00:05:09.465 ++ mktemp /tmp/62.XXX 00:05:09.465 + tmp_file_1=/tmp/62.IQs 00:05:09.465 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.465 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:09.465 + tmp_file_2=/tmp/spdk_tgt_config.json.AE5 00:05:09.465 + ret=0 00:05:09.465 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:09.724 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:09.983 + diff -u /tmp/62.IQs /tmp/spdk_tgt_config.json.AE5 00:05:09.983 + ret=1 00:05:09.983 + echo '=== Start of file: /tmp/62.IQs ===' 00:05:09.983 + cat /tmp/62.IQs 00:05:09.983 + echo '=== End of file: /tmp/62.IQs ===' 00:05:09.983 + echo '' 00:05:09.983 + echo '=== Start of file: /tmp/spdk_tgt_config.json.AE5 ===' 00:05:09.983 + cat /tmp/spdk_tgt_config.json.AE5 00:05:09.983 + echo '=== End of file: /tmp/spdk_tgt_config.json.AE5 ===' 00:05:09.983 + echo '' 00:05:09.983 + rm /tmp/62.IQs /tmp/spdk_tgt_config.json.AE5 00:05:09.983 + exit 1 00:05:09.983 13:46:37 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:09.983 INFO: configuration change detected. 
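The comparison just reported is how json_config decides whether a relaunched target matches its saved state: the live save_config output and spdk_tgt_config.json are both normalized with config_filter.py -method sort and diffed, and deleting MallocBdevForConfigChangeCheck must then produce a non-empty diff. Reduced to its essentials (temp-file handling simplified relative to json_diff.sh):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    live=$(mktemp); saved=$(mktemp)
    "$spdk/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
        | "$spdk/test/json_config/config_filter.py" -method sort > "$live"
    "$spdk/test/json_config/config_filter.py" -method sort < "$spdk/spdk_tgt_config.json" > "$saved"
    if diff -u "$saved" "$live"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$live" "$saved"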
00:05:09.983 13:46:37 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:09.983 13:46:37 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:09.983 13:46:37 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:09.983 13:46:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.983 13:46:37 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:09.983 13:46:37 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:09.983 13:46:37 json_config -- json_config/json_config.sh@321 -- # [[ -n 2785077 ]] 00:05:09.983 13:46:37 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:09.983 13:46:37 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:09.983 13:46:37 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:09.983 13:46:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.983 13:46:37 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:09.983 13:46:37 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:09.983 13:46:37 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:09.983 13:46:37 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:09.983 13:46:37 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:09.983 13:46:37 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:09.983 13:46:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:09.983 13:46:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.983 13:46:37 json_config -- json_config/json_config.sh@327 -- # killprocess 2785077 00:05:09.983 13:46:37 json_config -- common/autotest_common.sh@950 -- # '[' -z 2785077 ']' 00:05:09.983 13:46:37 json_config -- common/autotest_common.sh@954 -- # kill -0 2785077 00:05:09.983 13:46:37 json_config -- common/autotest_common.sh@955 -- # uname 00:05:09.983 13:46:37 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:09.983 13:46:37 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2785077 00:05:09.983 13:46:37 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:09.983 13:46:37 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:09.983 13:46:37 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2785077' 00:05:09.983 killing process with pid 2785077 00:05:09.983 13:46:37 json_config -- common/autotest_common.sh@969 -- # kill 2785077 00:05:09.983 13:46:37 json_config -- common/autotest_common.sh@974 -- # wait 2785077 00:05:11.359 13:46:38 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.359 13:46:38 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:11.359 13:46:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:11.359 13:46:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.618 13:46:38 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:11.618 13:46:38 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:11.618 INFO: Success 00:05:11.618 00:05:11.618 real 0m15.173s 
00:05:11.618 user 0m15.741s 00:05:11.618 sys 0m2.071s 00:05:11.618 13:46:38 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.618 13:46:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.618 ************************************ 00:05:11.618 END TEST json_config 00:05:11.618 ************************************ 00:05:11.618 13:46:38 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:11.618 13:46:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.618 13:46:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.618 13:46:38 -- common/autotest_common.sh@10 -- # set +x 00:05:11.618 ************************************ 00:05:11.618 START TEST json_config_extra_key 00:05:11.618 ************************************ 00:05:11.618 13:46:38 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:11.618 13:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:11.618 13:46:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:11.618 13:46:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:11.618 13:46:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:11.618 13:46:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:11.618 13:46:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:11.618 13:46:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:11.618 13:46:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:11.618 13:46:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:11.618 13:46:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:11.618 13:46:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:11.618 13:46:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:11.618 13:46:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:11.618 13:46:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:11.618 13:46:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:11.618 13:46:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:11.618 13:46:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:11.618 13:46:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:11.618 13:46:38 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:11.618 13:46:38 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:11.618 13:46:38 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:11.618 13:46:38 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:11.618 13:46:38 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.618 13:46:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.619 13:46:38 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.619 13:46:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:11.619 13:46:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.619 13:46:38 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:11.619 13:46:38 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:11.619 13:46:38 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:11.619 13:46:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:11.619 13:46:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:11.619 13:46:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:11.619 13:46:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:11.619 13:46:38 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:11.619 13:46:38 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:11.619 13:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:11.619 13:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:11.619 13:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:11.619 13:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:11.619 13:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:11.619 13:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:11.619 13:46:38 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:11.619 13:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:11.619 13:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:11.619 13:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:11.619 13:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:11.619 INFO: launching applications... 00:05:11.619 13:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:11.619 13:46:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:11.619 13:46:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:11.619 13:46:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:11.619 13:46:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:11.619 13:46:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:11.619 13:46:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.619 13:46:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.619 13:46:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2786361 00:05:11.619 13:46:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:11.619 Waiting for target to run... 00:05:11.619 13:46:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2786361 /var/tmp/spdk_tgt.sock 00:05:11.619 13:46:38 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2786361 ']' 00:05:11.619 13:46:38 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:11.619 13:46:38 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:11.619 13:46:38 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.619 13:46:38 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:11.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:11.619 13:46:38 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.619 13:46:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:11.619 [2024-07-26 13:46:39.025856] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
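Once the extra_key target is up, it is torn down the same way the json_config target was above: a SIGINT, then a bounded kill -0 poll until the PID disappears. The loop in json_config/common.sh boils down to:

    kill -SIGINT "$app_pid"                       # ask spdk_tgt to shut down cleanly
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break   # stop polling once the process is gone
        sleep 0.5
    done
    echo 'SPDK target shutdown done'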
00:05:11.619 [2024-07-26 13:46:39.025909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2786361 ] 00:05:11.619 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.187 [2024-07-26 13:46:39.458059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.187 [2024-07-26 13:46:39.549903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.446 13:46:39 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.446 13:46:39 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:12.446 13:46:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:12.446 00:05:12.446 13:46:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:12.446 INFO: shutting down applications... 00:05:12.446 13:46:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:12.446 13:46:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:12.446 13:46:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:12.446 13:46:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2786361 ]] 00:05:12.446 13:46:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2786361 00:05:12.446 13:46:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:12.446 13:46:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.446 13:46:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2786361 00:05:12.446 13:46:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:13.014 13:46:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:13.014 13:46:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.014 13:46:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2786361 00:05:13.014 13:46:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:13.014 13:46:40 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:13.014 13:46:40 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:13.014 13:46:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:13.014 SPDK target shutdown done 00:05:13.014 13:46:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:13.014 Success 00:05:13.014 00:05:13.014 real 0m1.446s 00:05:13.014 user 0m1.076s 00:05:13.014 sys 0m0.528s 00:05:13.014 13:46:40 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.014 13:46:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:13.014 ************************************ 00:05:13.014 END TEST json_config_extra_key 00:05:13.014 ************************************ 00:05:13.014 13:46:40 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:13.014 13:46:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.014 13:46:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.014 13:46:40 -- common/autotest_common.sh@10 -- # set +x 00:05:13.014 
************************************ 00:05:13.014 START TEST alias_rpc 00:05:13.014 ************************************ 00:05:13.014 13:46:40 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:13.274 * Looking for test storage... 00:05:13.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:13.274 13:46:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:13.274 13:46:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2786713 00:05:13.274 13:46:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.274 13:46:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2786713 00:05:13.274 13:46:40 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2786713 ']' 00:05:13.274 13:46:40 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.274 13:46:40 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.274 13:46:40 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.274 13:46:40 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.274 13:46:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.274 [2024-07-26 13:46:40.524526] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:05:13.274 [2024-07-26 13:46:40.524572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2786713 ] 00:05:13.274 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.274 [2024-07-26 13:46:40.578442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.274 [2024-07-26 13:46:40.652570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.212 13:46:41 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.212 13:46:41 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:14.212 13:46:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:14.212 13:46:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2786713 00:05:14.212 13:46:41 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2786713 ']' 00:05:14.212 13:46:41 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2786713 00:05:14.212 13:46:41 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:14.212 13:46:41 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:14.212 13:46:41 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2786713 00:05:14.212 13:46:41 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:14.212 13:46:41 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:14.212 13:46:41 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2786713' 00:05:14.212 killing process with pid 2786713 00:05:14.212 13:46:41 alias_rpc -- common/autotest_common.sh@969 -- # kill 2786713 00:05:14.212 13:46:41 
alias_rpc -- common/autotest_common.sh@974 -- # wait 2786713 00:05:14.472 00:05:14.472 real 0m1.478s 00:05:14.472 user 0m1.625s 00:05:14.472 sys 0m0.394s 00:05:14.472 13:46:41 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.472 13:46:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.472 ************************************ 00:05:14.472 END TEST alias_rpc 00:05:14.472 ************************************ 00:05:14.732 13:46:41 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:14.732 13:46:41 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:14.732 13:46:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.732 13:46:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.732 13:46:41 -- common/autotest_common.sh@10 -- # set +x 00:05:14.732 ************************************ 00:05:14.732 START TEST spdkcli_tcp 00:05:14.732 ************************************ 00:05:14.732 13:46:41 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:14.732 * Looking for test storage... 00:05:14.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:14.732 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:14.732 13:46:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:14.732 13:46:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:14.732 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:14.732 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:14.732 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:14.732 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:14.732 13:46:42 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:14.732 13:46:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.732 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2786996 00:05:14.732 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2786996 00:05:14.732 13:46:42 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2786996 ']' 00:05:14.732 13:46:42 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.732 13:46:42 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.732 13:46:42 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.732 13:46:42 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.732 13:46:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.732 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:14.732 [2024-07-26 13:46:42.093038] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:05:14.733 [2024-07-26 13:46:42.093088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2786996 ] 00:05:14.733 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.733 [2024-07-26 13:46:42.147189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.993 [2024-07-26 13:46:42.230139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.993 [2024-07-26 13:46:42.230143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.561 13:46:42 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.561 13:46:42 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:15.561 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2787051 00:05:15.561 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:15.561 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:15.870 [ 00:05:15.870 "bdev_malloc_delete", 00:05:15.870 "bdev_malloc_create", 00:05:15.870 "bdev_null_resize", 00:05:15.870 "bdev_null_delete", 00:05:15.870 "bdev_null_create", 00:05:15.870 "bdev_nvme_cuse_unregister", 00:05:15.870 "bdev_nvme_cuse_register", 00:05:15.870 "bdev_opal_new_user", 00:05:15.870 "bdev_opal_set_lock_state", 00:05:15.870 "bdev_opal_delete", 00:05:15.870 "bdev_opal_get_info", 00:05:15.870 "bdev_opal_create", 00:05:15.870 "bdev_nvme_opal_revert", 00:05:15.870 "bdev_nvme_opal_init", 00:05:15.870 "bdev_nvme_send_cmd", 00:05:15.870 "bdev_nvme_get_path_iostat", 00:05:15.870 "bdev_nvme_get_mdns_discovery_info", 00:05:15.870 "bdev_nvme_stop_mdns_discovery", 00:05:15.870 "bdev_nvme_start_mdns_discovery", 00:05:15.870 "bdev_nvme_set_multipath_policy", 00:05:15.870 "bdev_nvme_set_preferred_path", 00:05:15.870 "bdev_nvme_get_io_paths", 00:05:15.870 "bdev_nvme_remove_error_injection", 00:05:15.870 "bdev_nvme_add_error_injection", 00:05:15.870 "bdev_nvme_get_discovery_info", 00:05:15.870 "bdev_nvme_stop_discovery", 00:05:15.870 "bdev_nvme_start_discovery", 00:05:15.870 "bdev_nvme_get_controller_health_info", 00:05:15.870 "bdev_nvme_disable_controller", 00:05:15.870 "bdev_nvme_enable_controller", 00:05:15.870 "bdev_nvme_reset_controller", 00:05:15.870 "bdev_nvme_get_transport_statistics", 00:05:15.870 "bdev_nvme_apply_firmware", 00:05:15.870 "bdev_nvme_detach_controller", 00:05:15.870 "bdev_nvme_get_controllers", 00:05:15.870 "bdev_nvme_attach_controller", 00:05:15.870 "bdev_nvme_set_hotplug", 00:05:15.870 "bdev_nvme_set_options", 00:05:15.870 "bdev_passthru_delete", 00:05:15.870 "bdev_passthru_create", 00:05:15.870 "bdev_lvol_set_parent_bdev", 00:05:15.870 "bdev_lvol_set_parent", 00:05:15.870 "bdev_lvol_check_shallow_copy", 00:05:15.870 "bdev_lvol_start_shallow_copy", 00:05:15.870 "bdev_lvol_grow_lvstore", 00:05:15.870 "bdev_lvol_get_lvols", 00:05:15.870 "bdev_lvol_get_lvstores", 00:05:15.870 "bdev_lvol_delete", 00:05:15.870 "bdev_lvol_set_read_only", 00:05:15.870 "bdev_lvol_resize", 00:05:15.870 "bdev_lvol_decouple_parent", 00:05:15.870 "bdev_lvol_inflate", 00:05:15.870 "bdev_lvol_rename", 00:05:15.870 "bdev_lvol_clone_bdev", 00:05:15.870 "bdev_lvol_clone", 00:05:15.871 "bdev_lvol_snapshot", 00:05:15.871 "bdev_lvol_create", 00:05:15.871 "bdev_lvol_delete_lvstore", 00:05:15.871 
"bdev_lvol_rename_lvstore", 00:05:15.871 "bdev_lvol_create_lvstore", 00:05:15.871 "bdev_raid_set_options", 00:05:15.871 "bdev_raid_remove_base_bdev", 00:05:15.871 "bdev_raid_add_base_bdev", 00:05:15.871 "bdev_raid_delete", 00:05:15.871 "bdev_raid_create", 00:05:15.871 "bdev_raid_get_bdevs", 00:05:15.871 "bdev_error_inject_error", 00:05:15.871 "bdev_error_delete", 00:05:15.871 "bdev_error_create", 00:05:15.871 "bdev_split_delete", 00:05:15.871 "bdev_split_create", 00:05:15.871 "bdev_delay_delete", 00:05:15.871 "bdev_delay_create", 00:05:15.871 "bdev_delay_update_latency", 00:05:15.871 "bdev_zone_block_delete", 00:05:15.871 "bdev_zone_block_create", 00:05:15.871 "blobfs_create", 00:05:15.871 "blobfs_detect", 00:05:15.871 "blobfs_set_cache_size", 00:05:15.871 "bdev_aio_delete", 00:05:15.871 "bdev_aio_rescan", 00:05:15.871 "bdev_aio_create", 00:05:15.871 "bdev_ftl_set_property", 00:05:15.871 "bdev_ftl_get_properties", 00:05:15.871 "bdev_ftl_get_stats", 00:05:15.871 "bdev_ftl_unmap", 00:05:15.871 "bdev_ftl_unload", 00:05:15.871 "bdev_ftl_delete", 00:05:15.871 "bdev_ftl_load", 00:05:15.871 "bdev_ftl_create", 00:05:15.871 "bdev_virtio_attach_controller", 00:05:15.871 "bdev_virtio_scsi_get_devices", 00:05:15.871 "bdev_virtio_detach_controller", 00:05:15.871 "bdev_virtio_blk_set_hotplug", 00:05:15.871 "bdev_iscsi_delete", 00:05:15.871 "bdev_iscsi_create", 00:05:15.871 "bdev_iscsi_set_options", 00:05:15.871 "accel_error_inject_error", 00:05:15.871 "ioat_scan_accel_module", 00:05:15.871 "dsa_scan_accel_module", 00:05:15.871 "iaa_scan_accel_module", 00:05:15.871 "vfu_virtio_create_scsi_endpoint", 00:05:15.871 "vfu_virtio_scsi_remove_target", 00:05:15.871 "vfu_virtio_scsi_add_target", 00:05:15.871 "vfu_virtio_create_blk_endpoint", 00:05:15.871 "vfu_virtio_delete_endpoint", 00:05:15.871 "keyring_file_remove_key", 00:05:15.871 "keyring_file_add_key", 00:05:15.871 "keyring_linux_set_options", 00:05:15.871 "iscsi_get_histogram", 00:05:15.871 "iscsi_enable_histogram", 00:05:15.871 "iscsi_set_options", 00:05:15.871 "iscsi_get_auth_groups", 00:05:15.871 "iscsi_auth_group_remove_secret", 00:05:15.871 "iscsi_auth_group_add_secret", 00:05:15.871 "iscsi_delete_auth_group", 00:05:15.871 "iscsi_create_auth_group", 00:05:15.871 "iscsi_set_discovery_auth", 00:05:15.871 "iscsi_get_options", 00:05:15.871 "iscsi_target_node_request_logout", 00:05:15.871 "iscsi_target_node_set_redirect", 00:05:15.871 "iscsi_target_node_set_auth", 00:05:15.871 "iscsi_target_node_add_lun", 00:05:15.871 "iscsi_get_stats", 00:05:15.871 "iscsi_get_connections", 00:05:15.871 "iscsi_portal_group_set_auth", 00:05:15.871 "iscsi_start_portal_group", 00:05:15.871 "iscsi_delete_portal_group", 00:05:15.871 "iscsi_create_portal_group", 00:05:15.871 "iscsi_get_portal_groups", 00:05:15.871 "iscsi_delete_target_node", 00:05:15.871 "iscsi_target_node_remove_pg_ig_maps", 00:05:15.871 "iscsi_target_node_add_pg_ig_maps", 00:05:15.871 "iscsi_create_target_node", 00:05:15.871 "iscsi_get_target_nodes", 00:05:15.871 "iscsi_delete_initiator_group", 00:05:15.871 "iscsi_initiator_group_remove_initiators", 00:05:15.871 "iscsi_initiator_group_add_initiators", 00:05:15.871 "iscsi_create_initiator_group", 00:05:15.871 "iscsi_get_initiator_groups", 00:05:15.871 "nvmf_set_crdt", 00:05:15.871 "nvmf_set_config", 00:05:15.871 "nvmf_set_max_subsystems", 00:05:15.871 "nvmf_stop_mdns_prr", 00:05:15.871 "nvmf_publish_mdns_prr", 00:05:15.871 "nvmf_subsystem_get_listeners", 00:05:15.871 "nvmf_subsystem_get_qpairs", 00:05:15.871 "nvmf_subsystem_get_controllers", 00:05:15.871 
"nvmf_get_stats", 00:05:15.871 "nvmf_get_transports", 00:05:15.871 "nvmf_create_transport", 00:05:15.871 "nvmf_get_targets", 00:05:15.871 "nvmf_delete_target", 00:05:15.871 "nvmf_create_target", 00:05:15.871 "nvmf_subsystem_allow_any_host", 00:05:15.871 "nvmf_subsystem_remove_host", 00:05:15.871 "nvmf_subsystem_add_host", 00:05:15.871 "nvmf_ns_remove_host", 00:05:15.871 "nvmf_ns_add_host", 00:05:15.871 "nvmf_subsystem_remove_ns", 00:05:15.871 "nvmf_subsystem_add_ns", 00:05:15.871 "nvmf_subsystem_listener_set_ana_state", 00:05:15.871 "nvmf_discovery_get_referrals", 00:05:15.871 "nvmf_discovery_remove_referral", 00:05:15.871 "nvmf_discovery_add_referral", 00:05:15.871 "nvmf_subsystem_remove_listener", 00:05:15.871 "nvmf_subsystem_add_listener", 00:05:15.871 "nvmf_delete_subsystem", 00:05:15.871 "nvmf_create_subsystem", 00:05:15.871 "nvmf_get_subsystems", 00:05:15.871 "env_dpdk_get_mem_stats", 00:05:15.871 "nbd_get_disks", 00:05:15.871 "nbd_stop_disk", 00:05:15.871 "nbd_start_disk", 00:05:15.871 "ublk_recover_disk", 00:05:15.871 "ublk_get_disks", 00:05:15.871 "ublk_stop_disk", 00:05:15.871 "ublk_start_disk", 00:05:15.871 "ublk_destroy_target", 00:05:15.871 "ublk_create_target", 00:05:15.871 "virtio_blk_create_transport", 00:05:15.871 "virtio_blk_get_transports", 00:05:15.871 "vhost_controller_set_coalescing", 00:05:15.871 "vhost_get_controllers", 00:05:15.871 "vhost_delete_controller", 00:05:15.871 "vhost_create_blk_controller", 00:05:15.871 "vhost_scsi_controller_remove_target", 00:05:15.871 "vhost_scsi_controller_add_target", 00:05:15.871 "vhost_start_scsi_controller", 00:05:15.871 "vhost_create_scsi_controller", 00:05:15.871 "thread_set_cpumask", 00:05:15.871 "framework_get_governor", 00:05:15.871 "framework_get_scheduler", 00:05:15.871 "framework_set_scheduler", 00:05:15.871 "framework_get_reactors", 00:05:15.871 "thread_get_io_channels", 00:05:15.871 "thread_get_pollers", 00:05:15.871 "thread_get_stats", 00:05:15.871 "framework_monitor_context_switch", 00:05:15.871 "spdk_kill_instance", 00:05:15.871 "log_enable_timestamps", 00:05:15.871 "log_get_flags", 00:05:15.871 "log_clear_flag", 00:05:15.871 "log_set_flag", 00:05:15.871 "log_get_level", 00:05:15.871 "log_set_level", 00:05:15.871 "log_get_print_level", 00:05:15.871 "log_set_print_level", 00:05:15.871 "framework_enable_cpumask_locks", 00:05:15.871 "framework_disable_cpumask_locks", 00:05:15.871 "framework_wait_init", 00:05:15.871 "framework_start_init", 00:05:15.871 "scsi_get_devices", 00:05:15.871 "bdev_get_histogram", 00:05:15.871 "bdev_enable_histogram", 00:05:15.871 "bdev_set_qos_limit", 00:05:15.871 "bdev_set_qd_sampling_period", 00:05:15.871 "bdev_get_bdevs", 00:05:15.871 "bdev_reset_iostat", 00:05:15.871 "bdev_get_iostat", 00:05:15.871 "bdev_examine", 00:05:15.871 "bdev_wait_for_examine", 00:05:15.871 "bdev_set_options", 00:05:15.871 "notify_get_notifications", 00:05:15.871 "notify_get_types", 00:05:15.871 "accel_get_stats", 00:05:15.871 "accel_set_options", 00:05:15.871 "accel_set_driver", 00:05:15.871 "accel_crypto_key_destroy", 00:05:15.871 "accel_crypto_keys_get", 00:05:15.871 "accel_crypto_key_create", 00:05:15.871 "accel_assign_opc", 00:05:15.871 "accel_get_module_info", 00:05:15.871 "accel_get_opc_assignments", 00:05:15.871 "vmd_rescan", 00:05:15.871 "vmd_remove_device", 00:05:15.871 "vmd_enable", 00:05:15.871 "sock_get_default_impl", 00:05:15.871 "sock_set_default_impl", 00:05:15.871 "sock_impl_set_options", 00:05:15.871 "sock_impl_get_options", 00:05:15.871 "iobuf_get_stats", 00:05:15.871 "iobuf_set_options", 
00:05:15.871 "keyring_get_keys", 00:05:15.871 "framework_get_pci_devices", 00:05:15.871 "framework_get_config", 00:05:15.871 "framework_get_subsystems", 00:05:15.871 "vfu_tgt_set_base_path", 00:05:15.871 "trace_get_info", 00:05:15.871 "trace_get_tpoint_group_mask", 00:05:15.871 "trace_disable_tpoint_group", 00:05:15.871 "trace_enable_tpoint_group", 00:05:15.871 "trace_clear_tpoint_mask", 00:05:15.871 "trace_set_tpoint_mask", 00:05:15.871 "spdk_get_version", 00:05:15.871 "rpc_get_methods" 00:05:15.871 ] 00:05:15.871 13:46:43 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:15.871 13:46:43 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:15.871 13:46:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.871 13:46:43 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:15.871 13:46:43 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2786996 00:05:15.871 13:46:43 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2786996 ']' 00:05:15.871 13:46:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2786996 00:05:15.871 13:46:43 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:15.871 13:46:43 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:15.871 13:46:43 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2786996 00:05:15.871 13:46:43 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:15.871 13:46:43 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:15.871 13:46:43 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2786996' 00:05:15.871 killing process with pid 2786996 00:05:15.871 13:46:43 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2786996 00:05:15.871 13:46:43 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2786996 00:05:16.131 00:05:16.131 real 0m1.510s 00:05:16.131 user 0m2.825s 00:05:16.131 sys 0m0.421s 00:05:16.131 13:46:43 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.131 13:46:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.131 ************************************ 00:05:16.131 END TEST spdkcli_tcp 00:05:16.131 ************************************ 00:05:16.131 13:46:43 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:16.131 13:46:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.131 13:46:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.131 13:46:43 -- common/autotest_common.sh@10 -- # set +x 00:05:16.131 ************************************ 00:05:16.131 START TEST dpdk_mem_utility 00:05:16.131 ************************************ 00:05:16.131 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:16.391 * Looking for test storage... 
00:05:16.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:16.391 13:46:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:16.392 13:46:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2787309 00:05:16.392 13:46:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2787309 00:05:16.392 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2787309 ']' 00:05:16.392 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.392 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:16.392 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.392 13:46:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.392 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:16.392 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:16.392 [2024-07-26 13:46:43.637202] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:05:16.392 [2024-07-26 13:46:43.637253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2787309 ] 00:05:16.392 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.392 [2024-07-26 13:46:43.689126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.392 [2024-07-26 13:46:43.769403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.331 13:46:44 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:17.331 13:46:44 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:17.331 13:46:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:17.331 13:46:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:17.331 13:46:44 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.331 13:46:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.331 { 00:05:17.331 "filename": "/tmp/spdk_mem_dump.txt" 00:05:17.331 } 00:05:17.331 13:46:44 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.331 13:46:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:17.331 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:17.331 1 heaps totaling size 814.000000 MiB 00:05:17.331 size: 814.000000 MiB heap id: 0 00:05:17.332 end heaps---------- 00:05:17.332 8 mempools totaling size 598.116089 MiB 00:05:17.332 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:17.332 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:17.332 size: 84.521057 MiB name: bdev_io_2787309 00:05:17.332 size: 51.011292 MiB name: evtpool_2787309 00:05:17.332 
size: 50.003479 MiB name: msgpool_2787309 00:05:17.332 size: 21.763794 MiB name: PDU_Pool 00:05:17.332 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:17.332 size: 0.026123 MiB name: Session_Pool 00:05:17.332 end mempools------- 00:05:17.332 6 memzones totaling size 4.142822 MiB 00:05:17.332 size: 1.000366 MiB name: RG_ring_0_2787309 00:05:17.332 size: 1.000366 MiB name: RG_ring_1_2787309 00:05:17.332 size: 1.000366 MiB name: RG_ring_4_2787309 00:05:17.332 size: 1.000366 MiB name: RG_ring_5_2787309 00:05:17.332 size: 0.125366 MiB name: RG_ring_2_2787309 00:05:17.332 size: 0.015991 MiB name: RG_ring_3_2787309 00:05:17.332 end memzones------- 00:05:17.332 13:46:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:17.332 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:17.332 list of free elements. size: 12.519348 MiB 00:05:17.332 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:17.332 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:17.332 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:17.332 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:17.332 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:17.332 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:17.332 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:17.332 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:17.332 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:17.332 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:17.332 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:17.332 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:17.332 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:17.332 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:17.332 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:17.332 list of standard malloc elements. 
size: 199.218079 MiB 00:05:17.332 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:17.332 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:17.332 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:17.332 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:17.332 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:17.332 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:17.332 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:17.332 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:17.332 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:17.332 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:17.332 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:17.332 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:17.332 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:17.332 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:17.332 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:17.332 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:17.332 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:17.332 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:17.332 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:17.332 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:17.332 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:17.332 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:17.332 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:17.332 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:17.332 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:17.332 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:17.332 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:17.332 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:17.332 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:17.332 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:17.332 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:17.332 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:17.332 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:17.332 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:17.332 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:17.332 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:17.332 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:17.332 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:17.332 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:17.332 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:17.332 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:17.332 list of memzone associated elements. 
size: 602.262573 MiB 00:05:17.332 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:17.332 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:17.332 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:17.332 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:17.332 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:17.332 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2787309_0 00:05:17.332 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:17.332 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2787309_0 00:05:17.332 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:17.332 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2787309_0 00:05:17.332 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:17.332 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:17.332 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:17.332 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:17.332 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:17.332 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2787309 00:05:17.332 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:17.332 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2787309 00:05:17.332 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:17.332 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2787309 00:05:17.332 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:17.332 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:17.332 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:17.332 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:17.332 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:17.332 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:17.332 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:17.332 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:17.332 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:17.332 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2787309 00:05:17.332 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:17.332 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2787309 00:05:17.332 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:17.332 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2787309 00:05:17.332 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:17.332 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2787309 00:05:17.332 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:17.332 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2787309 00:05:17.332 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:17.332 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:17.332 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:17.332 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:17.332 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:17.332 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:17.332 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:17.332 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2787309 00:05:17.332 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:17.332 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:17.332 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:17.332 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:17.332 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:17.332 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2787309 00:05:17.332 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:17.332 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:17.332 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:17.332 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2787309 00:05:17.332 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:17.332 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2787309 00:05:17.332 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:17.332 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:17.332 13:46:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:17.332 13:46:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2787309 00:05:17.332 13:46:44 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2787309 ']' 00:05:17.332 13:46:44 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2787309 00:05:17.332 13:46:44 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:17.333 13:46:44 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:17.333 13:46:44 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2787309 00:05:17.333 13:46:44 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:17.333 13:46:44 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:17.333 13:46:44 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2787309' 00:05:17.333 killing process with pid 2787309 00:05:17.333 13:46:44 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2787309 00:05:17.333 13:46:44 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2787309 00:05:17.592 00:05:17.592 real 0m1.360s 00:05:17.592 user 0m1.443s 00:05:17.592 sys 0m0.371s 00:05:17.592 13:46:44 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.592 13:46:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.592 ************************************ 00:05:17.592 END TEST dpdk_mem_utility 00:05:17.592 ************************************ 00:05:17.592 13:46:44 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:17.592 13:46:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.592 13:46:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.592 13:46:44 -- common/autotest_common.sh@10 -- # set +x 00:05:17.592 ************************************ 00:05:17.592 START TEST event 00:05:17.592 ************************************ 00:05:17.592 13:46:44 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:17.592 * Looking for test storage... 
00:05:17.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:17.592 13:46:45 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:17.592 13:46:45 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:17.592 13:46:45 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:17.592 13:46:45 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:17.592 13:46:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.592 13:46:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.852 ************************************ 00:05:17.852 START TEST event_perf 00:05:17.852 ************************************ 00:05:17.852 13:46:45 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:17.852 Running I/O for 1 seconds...[2024-07-26 13:46:45.078022] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:05:17.852 [2024-07-26 13:46:45.078101] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2787594 ] 00:05:17.852 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.852 [2024-07-26 13:46:45.135222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:17.852 [2024-07-26 13:46:45.210660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.852 [2024-07-26 13:46:45.210760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.852 [2024-07-26 13:46:45.210844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.852 [2024-07-26 13:46:45.210845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.233 Running I/O for 1 seconds... 00:05:19.233 lcore 0: 207019 00:05:19.233 lcore 1: 207019 00:05:19.233 lcore 2: 207018 00:05:19.233 lcore 3: 207019 00:05:19.233 done. 00:05:19.233 00:05:19.233 real 0m1.224s 00:05:19.233 user 0m4.141s 00:05:19.233 sys 0m0.079s 00:05:19.233 13:46:46 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.233 13:46:46 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:19.233 ************************************ 00:05:19.233 END TEST event_perf 00:05:19.233 ************************************ 00:05:19.233 13:46:46 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:19.233 13:46:46 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:19.233 13:46:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.233 13:46:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.233 ************************************ 00:05:19.233 START TEST event_reactor 00:05:19.233 ************************************ 00:05:19.233 13:46:46 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:19.233 [2024-07-26 13:46:46.372125] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:05:19.233 [2024-07-26 13:46:46.372195] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2787844 ] 00:05:19.233 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.233 [2024-07-26 13:46:46.430788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.233 [2024-07-26 13:46:46.500600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.172 test_start 00:05:20.172 oneshot 00:05:20.172 tick 100 00:05:20.172 tick 100 00:05:20.172 tick 250 00:05:20.172 tick 100 00:05:20.172 tick 100 00:05:20.172 tick 100 00:05:20.172 tick 250 00:05:20.172 tick 500 00:05:20.172 tick 100 00:05:20.172 tick 100 00:05:20.172 tick 250 00:05:20.172 tick 100 00:05:20.172 tick 100 00:05:20.172 test_end 00:05:20.172 00:05:20.172 real 0m1.217s 00:05:20.172 user 0m1.137s 00:05:20.172 sys 0m0.076s 00:05:20.172 13:46:47 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.172 13:46:47 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:20.172 ************************************ 00:05:20.172 END TEST event_reactor 00:05:20.172 ************************************ 00:05:20.172 13:46:47 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:20.172 13:46:47 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:20.172 13:46:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.172 13:46:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.432 ************************************ 00:05:20.432 START TEST event_reactor_perf 00:05:20.432 ************************************ 00:05:20.432 13:46:47 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:20.432 [2024-07-26 13:46:47.659932] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:05:20.432 [2024-07-26 13:46:47.660004] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2788098 ] 00:05:20.432 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.432 [2024-07-26 13:46:47.717444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.432 [2024-07-26 13:46:47.787452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.815 test_start 00:05:21.815 test_end 00:05:21.815 Performance: 506241 events per second 00:05:21.815 00:05:21.815 real 0m1.217s 00:05:21.815 user 0m1.135s 00:05:21.815 sys 0m0.078s 00:05:21.815 13:46:48 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.815 13:46:48 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:21.815 ************************************ 00:05:21.815 END TEST event_reactor_perf 00:05:21.815 ************************************ 00:05:21.815 13:46:48 event -- event/event.sh@49 -- # uname -s 00:05:21.815 13:46:48 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:21.815 13:46:48 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:21.815 13:46:48 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.815 13:46:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.815 13:46:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.815 ************************************ 00:05:21.815 START TEST event_scheduler 00:05:21.815 ************************************ 00:05:21.815 13:46:48 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:21.815 * Looking for test storage... 00:05:21.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:21.815 13:46:49 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:21.815 13:46:49 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2788373 00:05:21.815 13:46:49 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.815 13:46:49 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:21.815 13:46:49 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2788373 00:05:21.815 13:46:49 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2788373 ']' 00:05:21.815 13:46:49 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.815 13:46:49 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.815 13:46:49 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:21.815 13:46:49 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.815 13:46:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:21.815 [2024-07-26 13:46:49.067095] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:05:21.815 [2024-07-26 13:46:49.067145] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2788373 ] 00:05:21.815 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.815 [2024-07-26 13:46:49.122037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:21.815 [2024-07-26 13:46:49.197634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.815 [2024-07-26 13:46:49.197724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.815 [2024-07-26 13:46:49.197808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:21.815 [2024-07-26 13:46:49.197810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.755 13:46:49 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.755 13:46:49 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:22.755 13:46:49 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:22.755 13:46:49 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.755 13:46:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.755 [2024-07-26 13:46:49.888203] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:22.755 [2024-07-26 13:46:49.888223] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:22.755 [2024-07-26 13:46:49.888232] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:22.755 [2024-07-26 13:46:49.888237] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:22.755 [2024-07-26 13:46:49.888243] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:22.755 13:46:49 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.755 13:46:49 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:22.755 13:46:49 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.755 13:46:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.755 [2024-07-26 13:46:49.959942] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
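Note: the scheduler RPC setup captured above is issued while the app sits in --wait-for-rpc mode (framework_set_scheduler first, then framework_start_init so the reactors begin scheduling), and the dynamic scheduler falls back to its built-in limits when the DPDK governor cannot initialize, as the NOTICE lines show. A minimal hand-run sketch of the same calls, assuming the workspace rpc.py path and the default /var/tmp/spdk.sock socket used elsewhere in this log:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk.sock                          # assumed default RPC socket
  $RPC -s $SOCK framework_set_scheduler dynamic    # governor init may fall back, as logged above
  $RPC -s $SOCK framework_get_scheduler            # optional check: load limit 20 / core limit 80 / core busy 95
  $RPC -s $SOCK framework_start_init               # complete subsystem init so the scheduler starts running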
00:05:22.755 13:46:49 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.755 13:46:49 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:22.755 13:46:49 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.755 13:46:49 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.755 13:46:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.755 ************************************ 00:05:22.755 START TEST scheduler_create_thread 00:05:22.755 ************************************ 00:05:22.755 13:46:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:22.755 13:46:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:22.755 13:46:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.755 13:46:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.755 2 00:05:22.755 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.755 13:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:22.755 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.755 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.755 3 00:05:22.755 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.755 13:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:22.755 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.755 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.755 4 00:05:22.755 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.755 13:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:22.755 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.755 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.755 5 00:05:22.755 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.755 13:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:22.755 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.755 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.755 6 00:05:22.755 13:46:50 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.755 13:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:22.755 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.755 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.756 7 00:05:22.756 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.756 13:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:22.756 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.756 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.756 8 00:05:22.756 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.756 13:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:22.756 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.756 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.756 9 00:05:22.756 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.756 13:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:22.756 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.756 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.756 10 00:05:22.756 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.756 13:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:22.756 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.756 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.756 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.756 13:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:22.756 13:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:22.756 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.756 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.324 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.325 13:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:23.325 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.325 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.702 13:46:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.702 13:46:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:24.702 13:46:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:24.702 13:46:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.702 13:46:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.078 13:46:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.078 00:05:26.078 real 0m3.098s 00:05:26.078 user 0m0.024s 00:05:26.078 sys 0m0.004s 00:05:26.078 13:46:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.078 13:46:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.079 ************************************ 00:05:26.079 END TEST scheduler_create_thread 00:05:26.079 ************************************ 00:05:26.079 13:46:53 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:26.079 13:46:53 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2788373 00:05:26.079 13:46:53 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2788373 ']' 00:05:26.079 13:46:53 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2788373 00:05:26.079 13:46:53 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:26.079 13:46:53 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.079 13:46:53 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2788373 00:05:26.079 13:46:53 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:26.079 13:46:53 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:26.079 13:46:53 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2788373' 00:05:26.079 killing process with pid 2788373 00:05:26.079 13:46:53 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2788373 00:05:26.079 13:46:53 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2788373 00:05:26.079 [2024-07-26 13:46:53.471076] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
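Note: the scheduler_create_thread sub-test above drives the running scheduler app purely through rpc.py's --plugin hook (scheduler_thread_create / scheduler_thread_set_active / scheduler_thread_delete). A hedged sketch of the same calls, assuming scheduler_plugin lives under the workspace's test/event/scheduler directory and reusing the thread ids (11, 12) that this particular run happened to get back:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  export PYTHONPATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler  # assumed plugin location
  $RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100     # prints the new thread id
  $RPC --plugin scheduler_plugin scheduler_thread_set_active 11 50                          # thread 11 -> 50% active
  $RPC --plugin scheduler_plugin scheduler_thread_delete 12                                 # remove thread 12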
00:05:26.338 00:05:26.338 real 0m4.756s 00:05:26.338 user 0m9.300s 00:05:26.338 sys 0m0.350s 00:05:26.338 13:46:53 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.338 13:46:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.338 ************************************ 00:05:26.338 END TEST event_scheduler 00:05:26.338 ************************************ 00:05:26.338 13:46:53 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:26.338 13:46:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:26.338 13:46:53 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.338 13:46:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.338 13:46:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.338 ************************************ 00:05:26.338 START TEST app_repeat 00:05:26.338 ************************************ 00:05:26.338 13:46:53 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:26.338 13:46:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.338 13:46:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.338 13:46:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:26.338 13:46:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.338 13:46:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:26.338 13:46:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:26.338 13:46:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:26.338 13:46:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2789126 00:05:26.338 13:46:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.338 13:46:53 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:26.338 13:46:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2789126' 00:05:26.338 Process app_repeat pid: 2789126 00:05:26.338 13:46:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:26.338 13:46:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:26.338 spdk_app_start Round 0 00:05:26.338 13:46:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2789126 /var/tmp/spdk-nbd.sock 00:05:26.597 13:46:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2789126 ']' 00:05:26.597 13:46:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.597 13:46:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.597 13:46:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:26.597 13:46:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.597 13:46:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.597 [2024-07-26 13:46:53.800800] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:05:26.597 [2024-07-26 13:46:53.800852] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2789126 ] 00:05:26.597 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.597 [2024-07-26 13:46:53.858433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.597 [2024-07-26 13:46:53.939896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.597 [2024-07-26 13:46:53.939899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.534 13:46:54 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.534 13:46:54 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:27.534 13:46:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.534 Malloc0 00:05:27.534 13:46:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.534 Malloc1 00:05:27.534 13:46:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.534 13:46:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.534 13:46:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.534 13:46:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.534 13:46:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.534 13:46:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.534 13:46:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.534 13:46:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.534 13:46:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.534 13:46:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.793 13:46:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.793 13:46:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.793 13:46:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:27.793 13:46:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.793 13:46:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.793 13:46:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.793 /dev/nbd0 00:05:27.793 13:46:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.793 13:46:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.793 13:46:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:27.793 13:46:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:27.793 13:46:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:27.793 13:46:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:27.793 13:46:55 event.app_repeat 
-- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:27.793 13:46:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:27.793 13:46:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:27.793 13:46:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:27.793 13:46:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.793 1+0 records in 00:05:27.793 1+0 records out 00:05:27.793 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188905 s, 21.7 MB/s 00:05:27.793 13:46:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.793 13:46:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:27.794 13:46:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.794 13:46:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:27.794 13:46:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:27.794 13:46:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.794 13:46:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.794 13:46:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:28.052 /dev/nbd1 00:05:28.052 13:46:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:28.052 13:46:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:28.052 13:46:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:28.052 13:46:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:28.052 13:46:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:28.052 13:46:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:28.052 13:46:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:28.052 13:46:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:28.052 13:46:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:28.052 13:46:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:28.052 13:46:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.052 1+0 records in 00:05:28.052 1+0 records out 00:05:28.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244204 s, 16.8 MB/s 00:05:28.052 13:46:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:28.052 13:46:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:28.052 13:46:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:28.052 13:46:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:28.052 13:46:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:28.052 13:46:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.052 13:46:55 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.052 13:46:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.052 13:46:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.052 13:46:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.311 13:46:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:28.311 { 00:05:28.311 "nbd_device": "/dev/nbd0", 00:05:28.311 "bdev_name": "Malloc0" 00:05:28.311 }, 00:05:28.311 { 00:05:28.311 "nbd_device": "/dev/nbd1", 00:05:28.311 "bdev_name": "Malloc1" 00:05:28.311 } 00:05:28.311 ]' 00:05:28.311 13:46:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.311 { 00:05:28.311 "nbd_device": "/dev/nbd0", 00:05:28.311 "bdev_name": "Malloc0" 00:05:28.311 }, 00:05:28.311 { 00:05:28.311 "nbd_device": "/dev/nbd1", 00:05:28.311 "bdev_name": "Malloc1" 00:05:28.311 } 00:05:28.311 ]' 00:05:28.311 13:46:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.311 13:46:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.311 /dev/nbd1' 00:05:28.311 13:46:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.311 /dev/nbd1' 00:05:28.311 13:46:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.311 13:46:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.311 13:46:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.311 13:46:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.311 13:46:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.311 13:46:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.311 13:46:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.311 13:46:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.311 13:46:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.311 13:46:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.311 13:46:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.311 13:46:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.311 256+0 records in 00:05:28.311 256+0 records out 00:05:28.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103189 s, 102 MB/s 00:05:28.311 13:46:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.311 13:46:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.311 256+0 records in 00:05:28.311 256+0 records out 00:05:28.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013465 s, 77.9 MB/s 00:05:28.311 13:46:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.311 13:46:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.311 256+0 records in 00:05:28.311 256+0 records out 00:05:28.312 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0146523 s, 71.6 MB/s 00:05:28.312 13:46:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:28.312 13:46:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.312 13:46:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.312 13:46:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.312 13:46:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.312 13:46:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.312 13:46:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.312 13:46:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.312 13:46:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.312 13:46:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.312 13:46:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.312 13:46:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.312 13:46:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.312 13:46:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.312 13:46:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.312 13:46:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.312 13:46:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:28.312 13:46:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.312 13:46:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.570 13:46:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.570 13:46:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.570 13:46:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.570 13:46:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.571 13:46:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.571 13:46:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.571 13:46:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.571 13:46:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.571 13:46:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.571 13:46:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.830 13:46:56 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.830 13:46:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.830 13:46:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.089 13:46:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:29.348 [2024-07-26 13:46:56.636419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.348 [2024-07-26 13:46:56.702767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.348 [2024-07-26 13:46:56.702771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.348 [2024-07-26 13:46:56.743401] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:29.348 [2024-07-26 13:46:56.743446] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.631 13:46:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:32.631 13:46:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:32.631 spdk_app_start Round 1 00:05:32.631 13:46:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2789126 /var/tmp/spdk-nbd.sock 00:05:32.631 13:46:59 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2789126 ']' 00:05:32.631 13:46:59 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.631 13:46:59 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.631 13:46:59 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:32.631 13:46:59 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.631 13:46:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.631 13:46:59 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.631 13:46:59 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:32.631 13:46:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.631 Malloc0 00:05:32.631 13:46:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.631 Malloc1 00:05:32.631 13:46:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.631 13:46:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.631 13:46:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.631 13:46:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:32.631 13:46:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.631 13:46:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:32.631 13:46:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.631 13:46:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.631 13:46:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.631 13:46:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:32.631 13:46:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.631 13:46:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:32.631 13:46:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:32.631 13:46:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:32.631 13:46:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.631 13:46:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:32.892 /dev/nbd0 00:05:32.892 13:47:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.892 13:47:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.892 13:47:00 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:32.892 13:47:00 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:32.892 13:47:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:32.892 13:47:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:32.892 13:47:00 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:32.892 13:47:00 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:32.893 13:47:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:32.893 13:47:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:32.893 13:47:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:32.893 1+0 records in 00:05:32.893 1+0 records out 00:05:32.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206599 s, 19.8 MB/s 00:05:32.893 13:47:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.893 13:47:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:32.893 13:47:00 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.893 13:47:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:32.893 13:47:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:32.893 13:47:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.893 13:47:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.893 13:47:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:33.152 /dev/nbd1 00:05:33.152 13:47:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:33.152 13:47:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:33.152 13:47:00 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:33.152 13:47:00 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:33.152 13:47:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:33.152 13:47:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:33.152 13:47:00 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:33.152 13:47:00 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:33.152 13:47:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:33.152 13:47:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:33.152 13:47:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.152 1+0 records in 00:05:33.152 1+0 records out 00:05:33.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206265 s, 19.9 MB/s 00:05:33.152 13:47:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.152 13:47:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:33.152 13:47:00 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.152 13:47:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:33.152 13:47:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:33.152 13:47:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.152 13:47:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.152 13:47:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.153 13:47:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.153 13:47:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.153 13:47:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:33.153 { 00:05:33.153 "nbd_device": "/dev/nbd0", 00:05:33.153 "bdev_name": "Malloc0" 00:05:33.153 }, 00:05:33.153 { 00:05:33.153 "nbd_device": "/dev/nbd1", 00:05:33.153 "bdev_name": "Malloc1" 00:05:33.153 } 00:05:33.153 ]' 00:05:33.153 13:47:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:33.153 { 00:05:33.153 "nbd_device": "/dev/nbd0", 00:05:33.153 "bdev_name": "Malloc0" 00:05:33.153 }, 00:05:33.153 { 00:05:33.153 "nbd_device": "/dev/nbd1", 00:05:33.153 "bdev_name": "Malloc1" 00:05:33.153 } 00:05:33.153 ]' 00:05:33.153 13:47:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:33.412 /dev/nbd1' 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:33.412 /dev/nbd1' 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:33.412 256+0 records in 00:05:33.412 256+0 records out 00:05:33.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103464 s, 101 MB/s 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:33.412 256+0 records in 00:05:33.412 256+0 records out 00:05:33.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136234 s, 77.0 MB/s 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:33.412 256+0 records in 00:05:33.412 256+0 records out 00:05:33.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141146 s, 74.3 MB/s 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.412 13:47:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:33.672 13:47:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:33.672 13:47:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:33.672 13:47:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:33.672 13:47:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.672 13:47:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.672 13:47:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:33.672 13:47:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.672 13:47:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.672 13:47:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.672 13:47:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:33.672 13:47:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:33.672 13:47:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:33.672 13:47:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:33.672 13:47:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.672 13:47:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.672 13:47:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:33.672 13:47:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.672 13:47:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.672 13:47:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.672 13:47:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.672 13:47:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.931 13:47:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:33.931 13:47:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:33.931 13:47:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.931 13:47:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:33.931 13:47:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:33.931 13:47:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.931 13:47:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:33.931 13:47:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:33.931 13:47:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:33.931 13:47:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:33.931 13:47:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:33.931 13:47:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:33.931 13:47:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.191 13:47:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:34.451 [2024-07-26 13:47:01.652831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.451 [2024-07-26 13:47:01.721528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.451 [2024-07-26 13:47:01.721530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.451 [2024-07-26 13:47:01.762422] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:34.451 [2024-07-26 13:47:01.762463] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:37.777 13:47:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:37.777 13:47:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:37.777 spdk_app_start Round 2 00:05:37.777 13:47:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2789126 /var/tmp/spdk-nbd.sock 00:05:37.777 13:47:04 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2789126 ']' 00:05:37.777 13:47:04 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.777 13:47:04 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.777 13:47:04 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:37.777 13:47:04 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.777 13:47:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.777 13:47:04 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.777 13:47:04 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:37.777 13:47:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.777 Malloc0 00:05:37.777 13:47:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.777 Malloc1 00:05:37.777 13:47:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.777 13:47:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.777 13:47:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.777 13:47:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.777 13:47:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.777 13:47:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.777 13:47:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.777 13:47:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.777 13:47:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.777 13:47:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.777 13:47:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.777 13:47:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.777 13:47:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:37.777 13:47:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.777 13:47:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.777 13:47:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:37.777 /dev/nbd0 00:05:37.777 13:47:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.037 13:47:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:38.037 1+0 records in 00:05:38.037 1+0 records out 00:05:38.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224037 s, 18.3 MB/s 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:38.037 13:47:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.037 13:47:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.037 13:47:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.037 /dev/nbd1 00:05:38.037 13:47:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.037 13:47:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.037 1+0 records in 00:05:38.037 1+0 records out 00:05:38.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237605 s, 17.2 MB/s 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:38.037 13:47:05 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:38.037 13:47:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.037 13:47:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.037 13:47:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.037 13:47:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.037 13:47:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.297 13:47:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:38.297 { 00:05:38.297 "nbd_device": "/dev/nbd0", 00:05:38.297 "bdev_name": "Malloc0" 00:05:38.297 }, 00:05:38.297 { 00:05:38.297 "nbd_device": "/dev/nbd1", 00:05:38.297 "bdev_name": "Malloc1" 00:05:38.297 } 00:05:38.297 ]' 00:05:38.297 13:47:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.297 { 00:05:38.297 "nbd_device": "/dev/nbd0", 00:05:38.297 "bdev_name": "Malloc0" 00:05:38.297 }, 00:05:38.297 { 00:05:38.297 "nbd_device": "/dev/nbd1", 00:05:38.297 "bdev_name": "Malloc1" 00:05:38.297 } 00:05:38.297 ]' 00:05:38.297 13:47:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.297 13:47:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.297 /dev/nbd1' 00:05:38.297 13:47:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.297 /dev/nbd1' 00:05:38.297 13:47:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.297 13:47:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.298 256+0 records in 00:05:38.298 256+0 records out 00:05:38.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010325 s, 102 MB/s 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.298 256+0 records in 00:05:38.298 256+0 records out 00:05:38.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142309 s, 73.7 MB/s 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.298 256+0 records in 00:05:38.298 256+0 records out 00:05:38.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148985 s, 70.4 MB/s 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.298 13:47:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:38.557 13:47:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:38.557 13:47:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:38.557 13:47:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:38.557 13:47:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.557 13:47:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.557 13:47:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:38.557 13:47:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.557 13:47:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.557 13:47:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.557 13:47:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.816 13:47:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.816 13:47:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.816 13:47:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:38.816 13:47:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.816 13:47:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.816 13:47:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.816 13:47:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.816 13:47:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.816 13:47:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.816 13:47:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.816 13:47:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.076 13:47:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.076 13:47:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.076 13:47:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.076 13:47:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.076 13:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.076 13:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.076 13:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.076 13:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.076 13:47:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.076 13:47:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.076 13:47:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.076 13:47:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.076 13:47:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:39.336 13:47:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:39.336 [2024-07-26 13:47:06.707018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.596 [2024-07-26 13:47:06.774082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.596 [2024-07-26 13:47:06.774084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.596 [2024-07-26 13:47:06.814950] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.596 [2024-07-26 13:47:06.814993] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:42.133 13:47:09 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2789126 /var/tmp/spdk-nbd.sock 00:05:42.133 13:47:09 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2789126 ']' 00:05:42.133 13:47:09 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.133 13:47:09 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.133 13:47:09 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:42.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:42.133 13:47:09 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.133 13:47:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.393 13:47:09 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.393 13:47:09 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:42.393 13:47:09 event.app_repeat -- event/event.sh@39 -- # killprocess 2789126 00:05:42.393 13:47:09 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2789126 ']' 00:05:42.393 13:47:09 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2789126 00:05:42.393 13:47:09 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:42.393 13:47:09 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.393 13:47:09 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2789126 00:05:42.393 13:47:09 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:42.393 13:47:09 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:42.393 13:47:09 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2789126' 00:05:42.393 killing process with pid 2789126 00:05:42.393 13:47:09 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2789126 00:05:42.394 13:47:09 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2789126 00:05:42.653 spdk_app_start is called in Round 0. 00:05:42.653 Shutdown signal received, stop current app iteration 00:05:42.653 Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 reinitialization... 00:05:42.653 spdk_app_start is called in Round 1. 00:05:42.653 Shutdown signal received, stop current app iteration 00:05:42.653 Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 reinitialization... 00:05:42.653 spdk_app_start is called in Round 2. 00:05:42.653 Shutdown signal received, stop current app iteration 00:05:42.653 Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 reinitialization... 00:05:42.653 spdk_app_start is called in Round 3. 
00:05:42.653 Shutdown signal received, stop current app iteration 00:05:42.653 13:47:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:42.653 13:47:09 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:42.653 00:05:42.653 real 0m16.145s 00:05:42.653 user 0m35.096s 00:05:42.653 sys 0m2.341s 00:05:42.653 13:47:09 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.653 13:47:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.653 ************************************ 00:05:42.653 END TEST app_repeat 00:05:42.653 ************************************ 00:05:42.653 13:47:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:42.653 13:47:09 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:42.653 13:47:09 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.653 13:47:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.653 13:47:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.653 ************************************ 00:05:42.653 START TEST cpu_locks 00:05:42.653 ************************************ 00:05:42.653 13:47:09 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:42.653 * Looking for test storage... 00:05:42.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:42.653 13:47:10 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:42.653 13:47:10 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:42.653 13:47:10 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:42.653 13:47:10 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:42.653 13:47:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.653 13:47:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.653 13:47:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.913 ************************************ 00:05:42.913 START TEST default_locks 00:05:42.913 ************************************ 00:05:42.913 13:47:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:42.913 13:47:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2792109 00:05:42.913 13:47:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2792109 00:05:42.913 13:47:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.913 13:47:10 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2792109 ']' 00:05:42.913 13:47:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.913 13:47:10 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.913 13:47:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
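The nbd_get_disks/jq sequence near the top of this run is how the app_repeat helper winds the app down: it asks the target on /var/tmp/spdk-nbd.sock for its exported NBD devices, counts how many /dev/nbd entries come back (none here, so there is nothing to detach), and then issues spdk_kill_instance SIGTERM over the same socket. A minimal shell sketch of that check, with scripts/rpc.py assumed to be run from an SPDK checkout rather than the full Jenkins workspace path used in the log:

  rpc_sock=/var/tmp/spdk-nbd.sock
  rpc=./scripts/rpc.py                                     # path assumed; the log uses the full workspace path
  disks_json=$($rpc -s "$rpc_sock" nbd_get_disks)          # JSON array of exported devices ("[]" in this run)
  count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  echo "exported NBD devices: $count"                      # 0 here, so nothing to detach
  $rpc -s "$rpc_sock" spdk_kill_instance SIGTERM           # ask the app to shut down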
00:05:42.913 13:47:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.913 13:47:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.913 [2024-07-26 13:47:10.154713] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:05:42.913 [2024-07-26 13:47:10.154756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2792109 ] 00:05:42.913 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.913 [2024-07-26 13:47:10.210558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.913 [2024-07-26 13:47:10.288422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.851 13:47:10 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.851 13:47:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:43.851 13:47:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2792109 00:05:43.851 13:47:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2792109 00:05:43.851 13:47:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.111 lslocks: write error 00:05:44.111 13:47:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2792109 00:05:44.111 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2792109 ']' 00:05:44.111 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2792109 00:05:44.111 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:44.111 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:44.111 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2792109 00:05:44.111 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:44.111 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:44.111 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2792109' 00:05:44.111 killing process with pid 2792109 00:05:44.111 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2792109 00:05:44.111 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2792109 00:05:44.371 13:47:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2792109 00:05:44.371 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:44.371 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2792109 00:05:44.371 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:44.371 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.371 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:44.371 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.371 13:47:11 event.cpu_locks.default_locks -- 
common/autotest_common.sh@653 -- # waitforlisten 2792109 00:05:44.371 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2792109 ']' 00:05:44.371 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.371 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.371 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.371 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.371 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2792109) - No such process 00:05:44.371 ERROR: process (pid: 2792109) is no longer running 00:05:44.371 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.372 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:44.372 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:44.372 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:44.372 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:44.372 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:44.372 13:47:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:44.372 13:47:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:44.372 13:47:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:44.372 13:47:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:44.372 00:05:44.372 real 0m1.679s 00:05:44.372 user 0m1.765s 00:05:44.372 sys 0m0.563s 00:05:44.372 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.372 13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.372 ************************************ 00:05:44.372 END TEST default_locks 00:05:44.372 ************************************ 00:05:44.632 13:47:11 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:44.632 13:47:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.632 13:47:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.632 13:47:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.632 ************************************ 00:05:44.632 START TEST default_locks_via_rpc 00:05:44.632 ************************************ 00:05:44.632 13:47:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:44.632 13:47:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2792491 00:05:44.632 13:47:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2792491 00:05:44.632 13:47:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:05:44.632 13:47:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2792491 ']' 00:05:44.632 13:47:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.632 13:47:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.632 13:47:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.632 13:47:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.632 13:47:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.632 [2024-07-26 13:47:11.903570] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:05:44.632 [2024-07-26 13:47:11.903609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2792491 ] 00:05:44.632 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.632 [2024-07-26 13:47:11.955558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.632 [2024-07-26 13:47:12.027813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2792491 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2792491 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 
2792491 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2792491 ']' 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2792491 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2792491 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2792491' 00:05:45.571 killing process with pid 2792491 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2792491 00:05:45.571 13:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2792491 00:05:45.831 00:05:45.831 real 0m1.351s 00:05:45.832 user 0m1.450s 00:05:45.832 sys 0m0.388s 00:05:45.832 13:47:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.832 13:47:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.832 ************************************ 00:05:45.832 END TEST default_locks_via_rpc 00:05:45.832 ************************************ 00:05:45.832 13:47:13 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:45.832 13:47:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.832 13:47:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.832 13:47:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.832 ************************************ 00:05:45.832 START TEST non_locking_app_on_locked_coremask 00:05:45.832 ************************************ 00:05:45.832 13:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:45.832 13:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2792771 00:05:45.832 13:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2792771 /var/tmp/spdk.sock 00:05:45.832 13:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2792771 ']' 00:05:45.832 13:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.832 13:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.832 13:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
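The default_locks_via_rpc pass that ends here drives the same per-core lock files through RPC rather than process startup: framework_disable_cpumask_locks releases the lock the target took for core 0, framework_enable_cpumask_locks re-claims it, and lslocks on the target's PID confirms an spdk_cpu_lock file is held again. A condensed sketch of that round trip, assuming a single spdk_tgt already listening on the default /var/tmp/spdk.sock:

  rpc=./scripts/rpc.py                                # path assumed; defaults to -s /var/tmp/spdk.sock
  pid=$(pgrep -f spdk_tgt | head -n1)                 # PID of the running target (assumed single instance)
  $rpc framework_disable_cpumask_locks                # drops /var/tmp/spdk_cpu_lock_* for its cores
  lslocks -p "$pid" | grep -c spdk_cpu_lock || true   # expect 0 while locks are off
  $rpc framework_enable_cpumask_locks                 # re-claims the lock file(s)
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"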
00:05:45.832 13:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.832 13:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.832 13:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.092 [2024-07-26 13:47:13.312826] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:05:46.092 [2024-07-26 13:47:13.312867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2792771 ] 00:05:46.092 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.092 [2024-07-26 13:47:13.366157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.092 [2024-07-26 13:47:13.446291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.033 13:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.033 13:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:47.033 13:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2792873 00:05:47.033 13:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2792873 /var/tmp/spdk2.sock 00:05:47.033 13:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2792873 ']' 00:05:47.033 13:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.033 13:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.033 13:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.033 13:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.033 13:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.033 13:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:47.033 [2024-07-26 13:47:14.149233] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:05:47.033 [2024-07-26 13:47:14.149281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2792873 ] 00:05:47.033 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.033 [2024-07-26 13:47:14.220373] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:47.033 [2024-07-26 13:47:14.220394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.033 [2024-07-26 13:47:14.364503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.605 13:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.605 13:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:47.605 13:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2792771 00:05:47.605 13:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.605 13:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2792771 00:05:48.175 lslocks: write error 00:05:48.175 13:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2792771 00:05:48.175 13:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2792771 ']' 00:05:48.175 13:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2792771 00:05:48.175 13:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:48.175 13:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.175 13:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2792771 00:05:48.175 13:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.175 13:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.175 13:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2792771' 00:05:48.175 killing process with pid 2792771 00:05:48.175 13:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2792771 00:05:48.175 13:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2792771 00:05:48.744 13:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2792873 00:05:48.744 13:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2792873 ']' 00:05:48.744 13:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2792873 00:05:48.744 13:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:48.744 13:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.744 13:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2792873 00:05:48.744 13:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.744 13:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.744 13:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2792873' 00:05:48.744 
killing process with pid 2792873 00:05:48.744 13:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2792873 00:05:48.744 13:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2792873 00:05:49.003 00:05:49.003 real 0m3.163s 00:05:49.003 user 0m3.389s 00:05:49.003 sys 0m0.914s 00:05:49.003 13:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.003 13:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.003 ************************************ 00:05:49.003 END TEST non_locking_app_on_locked_coremask 00:05:49.003 ************************************ 00:05:49.261 13:47:16 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:49.261 13:47:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.261 13:47:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.261 13:47:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.261 ************************************ 00:05:49.261 START TEST locking_app_on_unlocked_coremask 00:05:49.261 ************************************ 00:05:49.261 13:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:49.261 13:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2793358 00:05:49.261 13:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2793358 /var/tmp/spdk.sock 00:05:49.261 13:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2793358 ']' 00:05:49.262 13:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.262 13:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.262 13:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.262 13:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:49.262 13:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.262 13:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.262 [2024-07-26 13:47:16.535951] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:05:49.262 [2024-07-26 13:47:16.535989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2793358 ] 00:05:49.262 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.262 [2024-07-26 13:47:16.587047] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
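The non_locking_app_on_locked_coremask case that just finished is the escape hatch for the lock: the first target claims core 0 with -m 0x1, and a second instance can still start on the very same core only because it is launched with --disable-cpumask-locks (hence its "CPU core locks deactivated." notice) and a separate RPC socket. A sketch of that pairing, with the binary path shortened to a local SPDK build:

  ./build/bin/spdk_tgt -m 0x1 &                            # claims /var/tmp/spdk_cpu_lock_000
  sleep 2                                                  # crude wait; the harness uses waitforlisten
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
      -r /var/tmp/spdk2.sock &                             # shares core 0 without taking a lock
  lslocks | grep spdk_cpu_lock                             # only the first PID shows up here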
00:05:49.262 [2024-07-26 13:47:16.587071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.262 [2024-07-26 13:47:16.666903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.199 13:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.199 13:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:50.199 13:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2793465 00:05:50.199 13:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2793465 /var/tmp/spdk2.sock 00:05:50.199 13:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2793465 ']' 00:05:50.199 13:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.199 13:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.199 13:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.199 13:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.199 13:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:50.199 13:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.199 [2024-07-26 13:47:17.384585] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:05:50.199 [2024-07-26 13:47:17.384633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2793465 ] 00:05:50.199 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.199 [2024-07-26 13:47:17.461616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.199 [2024-07-26 13:47:17.614067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.768 13:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.768 13:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:50.768 13:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2793465 00:05:50.768 13:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2793465 00:05:50.768 13:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.335 lslocks: write error 00:05:51.335 13:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2793358 00:05:51.335 13:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2793358 ']' 00:05:51.335 13:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2793358 00:05:51.335 13:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:51.335 13:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.335 13:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2793358 00:05:51.335 13:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.335 13:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.335 13:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2793358' 00:05:51.335 killing process with pid 2793358 00:05:51.335 13:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2793358 00:05:51.335 13:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2793358 00:05:51.904 13:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2793465 00:05:51.904 13:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2793465 ']' 00:05:51.904 13:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2793465 00:05:51.904 13:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:52.163 13:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.163 13:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2793465 00:05:52.163 13:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:05:52.163 13:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.163 13:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2793465' 00:05:52.163 killing process with pid 2793465 00:05:52.163 13:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2793465 00:05:52.163 13:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2793465 00:05:52.423 00:05:52.423 real 0m3.198s 00:05:52.423 user 0m3.418s 00:05:52.423 sys 0m0.923s 00:05:52.423 13:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.423 13:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.423 ************************************ 00:05:52.423 END TEST locking_app_on_unlocked_coremask 00:05:52.423 ************************************ 00:05:52.423 13:47:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:52.423 13:47:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.423 13:47:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.423 13:47:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.423 ************************************ 00:05:52.423 START TEST locking_app_on_locked_coremask 00:05:52.423 ************************************ 00:05:52.423 13:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:52.423 13:47:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2793863 00:05:52.423 13:47:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2793863 /var/tmp/spdk.sock 00:05:52.423 13:47:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.423 13:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2793863 ']' 00:05:52.423 13:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.423 13:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.423 13:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.423 13:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.423 13:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.423 [2024-07-26 13:47:19.808816] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:05:52.423 [2024-07-26 13:47:19.808862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2793863 ] 00:05:52.423 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.683 [2024-07-26 13:47:19.862926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.683 [2024-07-26 13:47:19.931371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.252 13:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.252 13:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:53.252 13:47:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:53.252 13:47:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2794093 00:05:53.252 13:47:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2794093 /var/tmp/spdk2.sock 00:05:53.252 13:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:53.252 13:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2794093 /var/tmp/spdk2.sock 00:05:53.252 13:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:53.252 13:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.252 13:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:53.252 13:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.252 13:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2794093 /var/tmp/spdk2.sock 00:05:53.252 13:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2794093 ']' 00:05:53.252 13:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.252 13:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.252 13:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.252 13:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.252 13:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.252 [2024-07-26 13:47:20.647617] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:05:53.252 [2024-07-26 13:47:20.647662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2794093 ] 00:05:53.252 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.511 [2024-07-26 13:47:20.724120] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2793863 has claimed it. 00:05:53.511 [2024-07-26 13:47:20.724156] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:54.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2794093) - No such process 00:05:54.080 ERROR: process (pid: 2794093) is no longer running 00:05:54.080 13:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.080 13:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:54.080 13:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:54.080 13:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:54.080 13:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:54.080 13:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:54.080 13:47:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2793863 00:05:54.080 13:47:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2793863 00:05:54.080 13:47:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.340 lslocks: write error 00:05:54.340 13:47:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2793863 00:05:54.340 13:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2793863 ']' 00:05:54.340 13:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2793863 00:05:54.340 13:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:54.340 13:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.340 13:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2793863 00:05:54.340 13:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.340 13:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.340 13:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2793863' 00:05:54.340 killing process with pid 2793863 00:05:54.340 13:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2793863 00:05:54.340 13:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2793863 00:05:54.602 00:05:54.602 real 0m2.208s 00:05:54.602 user 0m2.430s 00:05:54.602 sys 0m0.604s 00:05:54.602 13:47:21 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.603 13:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.603 ************************************ 00:05:54.603 END TEST locking_app_on_locked_coremask 00:05:54.603 ************************************ 00:05:54.603 13:47:21 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:54.603 13:47:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.603 13:47:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.603 13:47:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.603 ************************************ 00:05:54.603 START TEST locking_overlapped_coremask 00:05:54.603 ************************************ 00:05:54.603 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:54.603 13:47:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2794349 00:05:54.603 13:47:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2794349 /var/tmp/spdk.sock 00:05:54.603 13:47:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:54.603 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2794349 ']' 00:05:54.603 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.603 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.603 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.603 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.603 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.927 [2024-07-26 13:47:22.073772] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:05:54.927 [2024-07-26 13:47:22.073815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2794349 ] 00:05:54.927 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.927 [2024-07-26 13:47:22.123388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.927 [2024-07-26 13:47:22.204299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.927 [2024-07-26 13:47:22.204319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.927 [2024-07-26 13:47:22.204321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.496 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.496 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:55.496 13:47:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2794508 00:05:55.496 13:47:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2794508 /var/tmp/spdk2.sock 00:05:55.496 13:47:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:55.496 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:55.496 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2794508 /var/tmp/spdk2.sock 00:05:55.496 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:55.496 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.496 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:55.496 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.496 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2794508 /var/tmp/spdk2.sock 00:05:55.496 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2794508 ']' 00:05:55.496 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.496 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.496 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.496 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.496 13:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.756 [2024-07-26 13:47:22.936366] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:05:55.756 [2024-07-26 13:47:22.936414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2794508 ] 00:05:55.756 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.756 [2024-07-26 13:47:23.012933] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2794349 has claimed it. 00:05:55.756 [2024-07-26 13:47:23.012975] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:56.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2794508) - No such process 00:05:56.325 ERROR: process (pid: 2794508) is no longer running 00:05:56.325 13:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.325 13:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:56.325 13:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:56.325 13:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:56.325 13:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:56.325 13:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:56.325 13:47:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:56.325 13:47:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:56.325 13:47:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:56.325 13:47:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:56.325 13:47:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2794349 00:05:56.325 13:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2794349 ']' 00:05:56.325 13:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2794349 00:05:56.325 13:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:56.325 13:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.325 13:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2794349 00:05:56.325 13:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.325 13:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.325 13:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2794349' 00:05:56.325 killing process with pid 2794349 00:05:56.326 13:47:23 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@969 -- # kill 2794349 00:05:56.326 13:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2794349 00:05:56.586 00:05:56.586 real 0m1.888s 00:05:56.586 user 0m5.380s 00:05:56.586 sys 0m0.389s 00:05:56.586 13:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.586 13:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.586 ************************************ 00:05:56.586 END TEST locking_overlapped_coremask 00:05:56.586 ************************************ 00:05:56.586 13:47:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:56.586 13:47:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.586 13:47:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.586 13:47:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.586 ************************************ 00:05:56.586 START TEST locking_overlapped_coremask_via_rpc 00:05:56.586 ************************************ 00:05:56.586 13:47:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:56.586 13:47:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2794630 00:05:56.586 13:47:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2794630 /var/tmp/spdk.sock 00:05:56.586 13:47:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:56.586 13:47:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2794630 ']' 00:05:56.586 13:47:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.586 13:47:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.586 13:47:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.586 13:47:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.586 13:47:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.846 [2024-07-26 13:47:24.040719] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:05:56.846 [2024-07-26 13:47:24.040765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2794630 ] 00:05:56.846 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.846 [2024-07-26 13:47:24.095842] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
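The locking_overlapped_coremask sequence above is the collision the whole suite is about: a target started with -m 0x7 holds one lock file per claimed core (/var/tmp/spdk_cpu_lock_000 through _002, exactly what check_remaining_locks compares against), so a second target started with -m 0x1c aborts during startup because core 2 appears in both masks ("Cannot create lock on core 2, probably process 2794349 has claimed it."). A sketch of reproducing that failure outside the harness, binary path again assumed local:

  ./build/bin/spdk_tgt -m 0x7 &                            # cores 0-2 -> spdk_cpu_lock_000..002
  sleep 2                                                  # crude wait for startup
  ls /var/tmp/spdk_cpu_lock_*                              # expect exactly _000 _001 _002
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock      # cores 2-4: exits, core 2 already locked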
00:05:56.846 [2024-07-26 13:47:24.095866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.846 [2024-07-26 13:47:24.172854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.846 [2024-07-26 13:47:24.172952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.846 [2024-07-26 13:47:24.172954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.416 13:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.416 13:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:57.416 13:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2794857 00:05:57.417 13:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2794857 /var/tmp/spdk2.sock 00:05:57.417 13:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:57.417 13:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2794857 ']' 00:05:57.417 13:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.417 13:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.417 13:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.417 13:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.417 13:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.677 [2024-07-26 13:47:24.887778] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:05:57.677 [2024-07-26 13:47:24.887828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2794857 ] 00:05:57.677 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.677 [2024-07-26 13:47:24.964991] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:57.677 [2024-07-26 13:47:24.965021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.677 [2024-07-26 13:47:25.110695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.936 [2024-07-26 13:47:25.114091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.936 [2024-07-26 13:47:25.114092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:58.504 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.504 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:58.504 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:58.504 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.504 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.504 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.504 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.504 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:58.504 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.504 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:58.504 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.504 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:58.504 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.504 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.504 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.504 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.504 [2024-07-26 13:47:25.705120] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2794630 has claimed it. 
00:05:58.504 request: 00:05:58.504 { 00:05:58.504 "method": "framework_enable_cpumask_locks", 00:05:58.504 "req_id": 1 00:05:58.504 } 00:05:58.504 Got JSON-RPC error response 00:05:58.504 response: 00:05:58.504 { 00:05:58.504 "code": -32603, 00:05:58.504 "message": "Failed to claim CPU core: 2" 00:05:58.504 } 00:05:58.504 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:58.504 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:58.504 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:58.504 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:58.504 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:58.505 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2794630 /var/tmp/spdk.sock 00:05:58.505 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2794630 ']' 00:05:58.505 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.505 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.505 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.505 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.505 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.505 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.505 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:58.505 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2794857 /var/tmp/spdk2.sock 00:05:58.505 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2794857 ']' 00:05:58.505 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.505 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.505 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
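The failure above comes down to a core-mask overlap: the first target was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so both sets contain core 2. Enabling the locks on the first target claims per-core lock files under /var/tmp (the spdk_cpu_lock_* files checked at the end of the test), and the same RPC against the second target then fails with -32603 because core 2 is already held. A minimal sketch of the sequence this test drives, assuming the same spdk_tgt binary and rpc.py script used above:

  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0,1,2 - locks off at startup
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2,3,4 - overlaps on core 2
  ./scripts/rpc.py framework_enable_cpumask_locks                                # first target claims its cores
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks         # fails: core 2 already claimed (-32603)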
00:05:58.505 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.505 13:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.764 13:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.764 13:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:58.764 13:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:58.764 13:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:58.764 13:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:58.764 13:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:58.764 00:05:58.764 real 0m2.099s 00:05:58.764 user 0m0.862s 00:05:58.764 sys 0m0.171s 00:05:58.764 13:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.764 13:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.764 ************************************ 00:05:58.764 END TEST locking_overlapped_coremask_via_rpc 00:05:58.764 ************************************ 00:05:58.764 13:47:26 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:58.764 13:47:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2794630 ]] 00:05:58.764 13:47:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2794630 00:05:58.764 13:47:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2794630 ']' 00:05:58.764 13:47:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2794630 00:05:58.764 13:47:26 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:58.764 13:47:26 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.764 13:47:26 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2794630 00:05:58.764 13:47:26 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.764 13:47:26 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.764 13:47:26 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2794630' 00:05:58.764 killing process with pid 2794630 00:05:58.764 13:47:26 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2794630 00:05:58.764 13:47:26 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2794630 00:05:59.333 13:47:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2794857 ]] 00:05:59.333 13:47:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2794857 00:05:59.333 13:47:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2794857 ']' 00:05:59.333 13:47:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2794857 00:05:59.333 13:47:26 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:59.333 13:47:26 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:05:59.333 13:47:26 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2794857 00:05:59.333 13:47:26 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:59.333 13:47:26 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:59.333 13:47:26 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2794857' 00:05:59.333 killing process with pid 2794857 00:05:59.333 13:47:26 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2794857 00:05:59.333 13:47:26 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2794857 00:05:59.593 13:47:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:59.593 13:47:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:59.593 13:47:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2794630 ]] 00:05:59.593 13:47:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2794630 00:05:59.593 13:47:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2794630 ']' 00:05:59.593 13:47:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2794630 00:05:59.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2794630) - No such process 00:05:59.594 13:47:26 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2794630 is not found' 00:05:59.594 Process with pid 2794630 is not found 00:05:59.594 13:47:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2794857 ]] 00:05:59.594 13:47:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2794857 00:05:59.594 13:47:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2794857 ']' 00:05:59.594 13:47:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2794857 00:05:59.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2794857) - No such process 00:05:59.594 13:47:26 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2794857 is not found' 00:05:59.594 Process with pid 2794857 is not found 00:05:59.594 13:47:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:59.594 00:05:59.594 real 0m16.877s 00:05:59.594 user 0m29.231s 00:05:59.594 sys 0m4.850s 00:05:59.594 13:47:26 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.594 13:47:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.594 ************************************ 00:05:59.594 END TEST cpu_locks 00:05:59.594 ************************************ 00:05:59.594 00:05:59.594 real 0m41.955s 00:05:59.594 user 1m20.238s 00:05:59.594 sys 0m8.128s 00:05:59.594 13:47:26 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.594 13:47:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.594 ************************************ 00:05:59.594 END TEST event 00:05:59.594 ************************************ 00:05:59.594 13:47:26 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:59.594 13:47:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.594 13:47:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.594 13:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:59.594 ************************************ 00:05:59.594 START TEST thread 00:05:59.594 ************************************ 00:05:59.594 13:47:26 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:59.853 * Looking for test storage... 00:05:59.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:59.853 13:47:27 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:59.853 13:47:27 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:59.853 13:47:27 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.853 13:47:27 thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.853 ************************************ 00:05:59.853 START TEST thread_poller_perf 00:05:59.853 ************************************ 00:05:59.853 13:47:27 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:59.853 [2024-07-26 13:47:27.087489] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:05:59.854 [2024-07-26 13:47:27.087539] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2795408 ] 00:05:59.854 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.854 [2024-07-26 13:47:27.135478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.854 [2024-07-26 13:47:27.206482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.854 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:01.253 ====================================== 00:06:01.253 busy:2309886750 (cyc) 00:06:01.253 total_run_count: 415000 00:06:01.253 tsc_hz: 2300000000 (cyc) 00:06:01.253 ====================================== 00:06:01.253 poller_cost: 5565 (cyc), 2419 (nsec) 00:06:01.253 00:06:01.253 real 0m1.204s 00:06:01.253 user 0m1.140s 00:06:01.253 sys 0m0.061s 00:06:01.253 13:47:28 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.253 13:47:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:01.253 ************************************ 00:06:01.253 END TEST thread_poller_perf 00:06:01.253 ************************************ 00:06:01.253 13:47:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:01.253 13:47:28 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:01.253 13:47:28 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.253 13:47:28 thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.253 ************************************ 00:06:01.253 START TEST thread_poller_perf 00:06:01.253 ************************************ 00:06:01.253 13:47:28 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:01.253 [2024-07-26 13:47:28.361952] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:06:01.253 [2024-07-26 13:47:28.362024] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2795593 ] 00:06:01.253 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.253 [2024-07-26 13:47:28.418472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.253 [2024-07-26 13:47:28.489515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.253 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:02.193 ====================================== 00:06:02.193 busy:2301644048 (cyc) 00:06:02.193 total_run_count: 5512000 00:06:02.193 tsc_hz: 2300000000 (cyc) 00:06:02.193 ====================================== 00:06:02.193 poller_cost: 417 (cyc), 181 (nsec) 00:06:02.193 00:06:02.193 real 0m1.216s 00:06:02.193 user 0m1.144s 00:06:02.193 sys 0m0.068s 00:06:02.193 13:47:29 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.193 13:47:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.193 ************************************ 00:06:02.193 END TEST thread_poller_perf 00:06:02.193 ************************************ 00:06:02.193 13:47:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:02.193 00:06:02.193 real 0m2.624s 00:06:02.193 user 0m2.365s 00:06:02.193 sys 0m0.266s 00:06:02.193 13:47:29 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.193 13:47:29 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.193 ************************************ 00:06:02.193 END TEST thread 00:06:02.193 ************************************ 00:06:02.193 13:47:29 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:02.193 13:47:29 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:02.193 13:47:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.193 13:47:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.193 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:06:02.453 ************************************ 00:06:02.453 START TEST app_cmdline 00:06:02.453 ************************************ 00:06:02.453 13:47:29 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:02.453 * Looking for test storage... 00:06:02.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:02.454 13:47:29 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:02.454 13:47:29 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2795911 00:06:02.454 13:47:29 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2795911 00:06:02.454 13:47:29 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:02.454 13:47:29 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2795911 ']' 00:06:02.454 13:47:29 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.454 13:47:29 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.454 13:47:29 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
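Both poller_cost lines above follow directly from the counters printed with them: the busy cycle total divided by total_run_count gives cycles per poll, and dividing by the reported 2.3 GHz TSC rate converts that to nanoseconds.

  1 us period run:  2309886750 cyc / 415000 polls  = 5565 cyc  ->  5565 / 2.3 ≈ 2419 ns
  0 us period run:  2301644048 cyc / 5512000 polls = 417 cyc   ->  417 / 2.3  ≈ 181 ns

In other words, on this machine the 1 us timed pollers cost roughly 13x more cycles per invocation than the zero-period busy pollers.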
00:06:02.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.454 13:47:29 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.454 13:47:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:02.454 [2024-07-26 13:47:29.797222] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:06:02.454 [2024-07-26 13:47:29.797274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2795911 ] 00:06:02.454 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.454 [2024-07-26 13:47:29.852173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.713 [2024-07-26 13:47:29.932501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.282 13:47:30 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.282 13:47:30 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:03.282 13:47:30 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:03.542 { 00:06:03.542 "version": "SPDK v24.09-pre git sha1 a14c64d79", 00:06:03.542 "fields": { 00:06:03.542 "major": 24, 00:06:03.542 "minor": 9, 00:06:03.542 "patch": 0, 00:06:03.542 "suffix": "-pre", 00:06:03.542 "commit": "a14c64d79" 00:06:03.542 } 00:06:03.542 } 00:06:03.542 13:47:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:03.542 13:47:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:03.542 13:47:30 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:03.542 13:47:30 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:03.542 13:47:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:03.542 13:47:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.542 13:47:30 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.542 13:47:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:03.542 13:47:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:03.542 13:47:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:03.542 request: 00:06:03.542 { 00:06:03.542 "method": "env_dpdk_get_mem_stats", 00:06:03.542 "req_id": 1 00:06:03.542 } 00:06:03.542 Got JSON-RPC error response 00:06:03.542 response: 00:06:03.542 { 00:06:03.542 "code": -32601, 00:06:03.542 "message": "Method not found" 00:06:03.542 } 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.542 13:47:30 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2795911 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2795911 ']' 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2795911 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.542 13:47:30 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2795911 00:06:03.802 13:47:31 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.802 13:47:31 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.802 13:47:31 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2795911' 00:06:03.802 killing process with pid 2795911 00:06:03.802 13:47:31 app_cmdline -- common/autotest_common.sh@969 -- # kill 2795911 00:06:03.802 13:47:31 app_cmdline -- common/autotest_common.sh@974 -- # wait 2795911 00:06:04.062 00:06:04.062 real 0m1.663s 00:06:04.062 user 0m1.968s 00:06:04.062 sys 0m0.435s 00:06:04.062 13:47:31 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.062 13:47:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:04.062 ************************************ 00:06:04.062 END TEST app_cmdline 00:06:04.062 ************************************ 00:06:04.062 13:47:31 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:04.062 13:47:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.062 13:47:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.062 13:47:31 -- common/autotest_common.sh@10 -- # set +x 00:06:04.062 ************************************ 00:06:04.062 START TEST version 00:06:04.062 ************************************ 00:06:04.062 13:47:31 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:04.062 * Looking for test storage... 
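The app_cmdline test above starts the target with an RPC allow-list, so only the two listed methods resolve and everything else is rejected with JSON-RPC -32601 as shown. A rough by-hand equivalent, assuming the same spdk_tgt and rpc.py paths used by the test:

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py rpc_get_methods           # lists only the two allowed methods
  ./scripts/rpc.py spdk_get_version          # returns the version/fields JSON seen above
  ./scripts/rpc.py env_dpdk_get_mem_stats    # rejected with "Method not found" (-32601)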
00:06:04.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:04.062 13:47:31 version -- app/version.sh@17 -- # get_header_version major 00:06:04.062 13:47:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:04.062 13:47:31 version -- app/version.sh@14 -- # cut -f2 00:06:04.062 13:47:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.062 13:47:31 version -- app/version.sh@17 -- # major=24 00:06:04.062 13:47:31 version -- app/version.sh@18 -- # get_header_version minor 00:06:04.062 13:47:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:04.062 13:47:31 version -- app/version.sh@14 -- # cut -f2 00:06:04.062 13:47:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.062 13:47:31 version -- app/version.sh@18 -- # minor=9 00:06:04.062 13:47:31 version -- app/version.sh@19 -- # get_header_version patch 00:06:04.062 13:47:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:04.062 13:47:31 version -- app/version.sh@14 -- # cut -f2 00:06:04.062 13:47:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.062 13:47:31 version -- app/version.sh@19 -- # patch=0 00:06:04.062 13:47:31 version -- app/version.sh@20 -- # get_header_version suffix 00:06:04.062 13:47:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:04.062 13:47:31 version -- app/version.sh@14 -- # cut -f2 00:06:04.062 13:47:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.062 13:47:31 version -- app/version.sh@20 -- # suffix=-pre 00:06:04.062 13:47:31 version -- app/version.sh@22 -- # version=24.9 00:06:04.062 13:47:31 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:04.062 13:47:31 version -- app/version.sh@28 -- # version=24.9rc0 00:06:04.062 13:47:31 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:04.062 13:47:31 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:04.321 13:47:31 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:04.321 13:47:31 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:04.321 00:06:04.321 real 0m0.146s 00:06:04.321 user 0m0.078s 00:06:04.321 sys 0m0.105s 00:06:04.321 13:47:31 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.321 13:47:31 version -- common/autotest_common.sh@10 -- # set +x 00:06:04.321 ************************************ 00:06:04.322 END TEST version 00:06:04.322 ************************************ 00:06:04.322 13:47:31 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:04.322 13:47:31 -- spdk/autotest.sh@202 -- # uname -s 00:06:04.322 13:47:31 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:06:04.322 13:47:31 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:04.322 13:47:31 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:04.322 13:47:31 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
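Each component the version test prints comes straight out of the tree's include/spdk/version.h via the grep/cut/tr pipeline shown above; for example, the major number alone:

  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # -> 24

The same pattern with MINOR, PATCH and SUFFIX yields 9, 0 and -pre, which the script folds into the 24.9rc0 string it then cross-checks against python3 -c 'import spdk; print(spdk.__version__)'.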
00:06:04.322 13:47:31 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:04.322 13:47:31 -- spdk/autotest.sh@264 -- # timing_exit lib 00:06:04.322 13:47:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:04.322 13:47:31 -- common/autotest_common.sh@10 -- # set +x 00:06:04.322 13:47:31 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:04.322 13:47:31 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:06:04.322 13:47:31 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:06:04.322 13:47:31 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:06:04.322 13:47:31 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:06:04.322 13:47:31 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:06:04.322 13:47:31 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:04.322 13:47:31 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:04.322 13:47:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.322 13:47:31 -- common/autotest_common.sh@10 -- # set +x 00:06:04.322 ************************************ 00:06:04.322 START TEST nvmf_tcp 00:06:04.322 ************************************ 00:06:04.322 13:47:31 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:04.322 * Looking for test storage... 00:06:04.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:04.322 13:47:31 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:04.322 13:47:31 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:04.322 13:47:31 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:04.322 13:47:31 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:04.322 13:47:31 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.322 13:47:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.322 ************************************ 00:06:04.322 START TEST nvmf_target_core 00:06:04.322 ************************************ 00:06:04.322 13:47:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:04.582 * Looking for test storage... 00:06:04.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:04.582 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:04.583 ************************************ 00:06:04.583 START TEST nvmf_abort 00:06:04.583 ************************************ 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:04.583 * Looking for test storage... 
00:06:04.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:04.583 13:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:04.583 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:04.844 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:04.844 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:04.844 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:04.844 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.126 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:10.127 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:10.127 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:10.127 13:47:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:10.127 Found net devices under 0000:86:00.0: cvl_0_0 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:10.127 Found net devices under 0000:86:00.1: cvl_0_1 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:10.127 
13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:10.127 13:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:10.127 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:10.127 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:10.127 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:10.127 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:10.127 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:10.127 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:10.127 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:10.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:10.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:06:10.127 00:06:10.127 --- 10.0.0.2 ping statistics --- 00:06:10.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:10.127 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:06:10.127 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:10.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:10.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:06:10.127 00:06:10.127 --- 10.0.0.1 ping statistics --- 00:06:10.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:10.127 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:06:10.127 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:10.127 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:10.127 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:10.127 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:10.127 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:10.127 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:10.127 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:10.127 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:10.127 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:10.127 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:10.127 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:10.128 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:10.128 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.128 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=2799371 00:06:10.128 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2799371 00:06:10.128 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:10.128 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2799371 ']' 00:06:10.128 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.128 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.128 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.128 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.128 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.128 [2024-07-26 13:47:37.296496] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:06:10.128 [2024-07-26 13:47:37.296542] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:10.128 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.128 [2024-07-26 13:47:37.355321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.128 [2024-07-26 13:47:37.436477] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:10.128 [2024-07-26 13:47:37.436513] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:10.128 [2024-07-26 13:47:37.436520] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:10.128 [2024-07-26 13:47:37.436526] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:10.128 [2024-07-26 13:47:37.436531] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
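The trace above is nvmf/common.sh's nvmf_tcp_init followed by nvmfappstart: one port of the e810 pair is moved into a private network namespace to act as the NVMe/TCP target, while the other stays in the root namespace as the initiator. A condensed sketch of that sequence, using the interface names, addresses and paths visible in this run (the real helpers add error handling and cleanup traps that are omitted here):

# hedged sketch of the nvmf_tcp_init + nvmfappstart steps traced above
TARGET_IF=cvl_0_0            # moved into the namespace, becomes 10.0.0.2
INITIATOR_IF=cvl_0_1         # stays in the root namespace, becomes 10.0.0.1
NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # initiator -> target sanity check
ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator sanity check
modprobe nvme-tcp

# nvmfappstart then launches the target inside the namespace with the requested
# core mask and records its pid for waitforlisten/killprocess:
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!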
00:06:10.128 [2024-07-26 13:47:37.436629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.128 [2024-07-26 13:47:37.436715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.128 [2024-07-26 13:47:37.436717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.697 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.697 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:10.697 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:10.697 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:10.697 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.956 [2024-07-26 13:47:38.160305] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.956 Malloc0 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.956 Delay0 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.956 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.956 [2024-07-26 13:47:38.239680] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:10.957 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.957 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:10.957 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.957 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.957 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.957 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:10.957 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.957 [2024-07-26 13:47:38.307266] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:13.493 Initializing NVMe Controllers 00:06:13.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:13.493 controller IO queue size 128 less than required 00:06:13.493 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:13.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:13.493 Initialization complete. Launching workers. 
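Before the completion statistics that follow below, the abort test stands up its target entirely over the RPC socket: a TCP transport, a Malloc bdev wrapped in a Delay bdev (so outstanding I/O lingers long enough to be aborted), a subsystem exposing that namespace, and listeners for the subsystem and the discovery service, after which the bundled abort example drives it. A hedged consolidation of the rpc_cmd calls traced above; rpc_cmd is assumed here to resolve to scripts/rpc.py against /var/tmp/spdk.sock:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py"                    # what rpc_cmd is assumed to wrap in this run

$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0    # 64 MiB backing bdev, 4 KiB blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# queue depth 128 against the high-latency Delay0 namespace keeps I/O queued,
# giving the example requests it can abort (results are printed below)
"$SPDK/build/examples/abort" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128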
00:06:13.493 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41376 00:06:13.493 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41437, failed to submit 62 00:06:13.493 success 41380, unsuccess 57, failed 0 00:06:13.493 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:13.493 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.493 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:13.494 rmmod nvme_tcp 00:06:13.494 rmmod nvme_fabrics 00:06:13.494 rmmod nvme_keyring 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2799371 ']' 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2799371 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2799371 ']' 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2799371 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2799371 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2799371' 00:06:13.494 killing process with pid 2799371 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2799371 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2799371 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:13.494 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.406 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:15.406 00:06:15.406 real 0m10.898s 00:06:15.406 user 0m13.024s 00:06:15.406 sys 0m4.866s 00:06:15.406 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.406 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.406 ************************************ 00:06:15.406 END TEST nvmf_abort 00:06:15.406 ************************************ 00:06:15.406 13:47:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:15.406 13:47:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:15.406 13:47:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.406 13:47:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:15.666 ************************************ 00:06:15.666 START TEST nvmf_ns_hotplug_stress 00:06:15.666 ************************************ 00:06:15.666 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:15.666 * Looking for test storage... 
00:06:15.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:15.666 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.666 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:15.666 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.666 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:15.667 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
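This second pass through gather_supported_nvmf_pci_devs repeats, for ns_hotplug_stress, the NIC discovery already seen before the abort test: known vendor:device IDs are collected into the e810/x722/mlx arrays and each matching PCI function is mapped to its kernel net device through sysfs. A simplified sketch of that mapping, assuming pci_bus_cache has already been filled by the harness (only the e810 branch taken in this run is shown):

# hedged sketch of the e810 discovery path traced above, not the verbatim common.sh code
intel=0x8086
e810=() net_devs=()
# pci_bus_cache["vendor:device"] -> PCI addresses; assumed pre-populated elsewhere
e810+=(${pci_bus_cache["$intel:0x1592"]})
e810+=(${pci_bus_cache["$intel:0x159b"]})
for pci in "${e810[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    for net_dev in "${pci_net_devs[@]}"; do
        [[ $(< "$net_dev/operstate") == up ]] || continue   # keep only links that are up
        net_devs+=("${net_dev##*/}")                        # strip sysfs path -> ifname
        echo "Found net devices under $pci: ${net_dev##*/}"
    done
done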
00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:20.998 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:20.998 13:47:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:20.998 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:20.998 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:20.999 Found net devices under 0000:86:00.0: cvl_0_0 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:20.999 Found net devices under 0000:86:00.1: cvl_0_1 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:20.999 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:20.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:20.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:06:20.999 00:06:20.999 --- 10.0.0.2 ping statistics --- 00:06:20.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.999 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:20.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:20.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:06:20.999 00:06:20.999 --- 10.0.0.1 ping statistics --- 00:06:20.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.999 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2803393 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2803393 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2803393 ']' 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
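At this point nvmfappstart has launched the second nvmf_tgt (pid 2803393) inside the namespace, and waitforlisten blocks until the target answers on /var/tmp/spdk.sock. A minimal illustration of that wait pattern, included for clarity only; it is an assumption, not the verbatim autotest_common.sh implementation, and the helper name below is hypothetical:

# poll until the target's RPC socket answers, bail out if the process died
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 100; i != 0; i--)); do
        kill -0 "$pid" || return 1                                         # target crashed
        ./scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1                                                               # retries exhausted
}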
00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.999 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:20.999 [2024-07-26 13:47:48.291648] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:06:20.999 [2024-07-26 13:47:48.291694] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:20.999 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.999 [2024-07-26 13:47:48.351926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.999 [2024-07-26 13:47:48.424609] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:20.999 [2024-07-26 13:47:48.424660] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:20.999 [2024-07-26 13:47:48.424668] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:20.999 [2024-07-26 13:47:48.424673] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:20.999 [2024-07-26 13:47:48.424678] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:20.999 [2024-07-26 13:47:48.424778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.999 [2024-07-26 13:47:48.424846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.999 [2024-07-26 13:47:48.424848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.939 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.939 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:21.939 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:21.939 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:21.939 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:21.939 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:21.939 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:21.939 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:21.939 [2024-07-26 13:47:49.294685] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.939 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:22.199 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:22.458 
[2024-07-26 13:47:49.668875] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:22.458 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:22.458 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:22.718 Malloc0 00:06:22.718 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:22.977 Delay0 00:06:22.977 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.236 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:23.236 NULL1 00:06:23.236 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:23.496 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2803885 00:06:23.496 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:23.496 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:23.496 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.496 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.755 Read completed with error (sct=0, sc=11) 00:06:23.755 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.755 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:23.755 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:24.014 true 00:06:24.014 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 2803885 00:06:24.014 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.951 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.951 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:24.951 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:25.211 true 00:06:25.211 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:25.211 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.471 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.471 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:25.471 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:25.730 true 00:06:25.730 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:25.730 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.110 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.110 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:27.110 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:27.369 true 00:06:27.369 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:27.369 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.308 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.308 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:28.308 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:28.568 true 00:06:28.568 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:28.568 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.829 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.829 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:28.829 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:29.088 true 00:06:29.088 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:29.088 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.468 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.468 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:30.468 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:30.728 true 00:06:30.728 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:30.728 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.665 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.665 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:31.665 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:31.923 true 00:06:31.923 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:31.923 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.923 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.183 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:32.183 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:32.442 true 00:06:32.442 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:32.442 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.389 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.685 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:33.685 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:33.944 true 00:06:33.944 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:33.944 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.885 13:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.885 13:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:34.885 13:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:35.145 true 00:06:35.145 13:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:35.145 13:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.405 13:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.405 13:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:35.405 13:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:35.665 true 00:06:35.665 13:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:35.665 13:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.046 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.046 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:37.046 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:37.306 true 00:06:37.306 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:37.306 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.247 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.247 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:38.247 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:38.507 true 00:06:38.507 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:38.507 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.507 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.767 13:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:38.767 13:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:39.027 true 00:06:39.027 13:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:39.027 13:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.408 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.408 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:40.408 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:40.408 true 00:06:40.668 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:40.668 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.237 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.497 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:41.497 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:41.757 true 00:06:41.757 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:41.757 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.017 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.017 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:42.017 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:42.276 true 00:06:42.276 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:42.276 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.657 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.657 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:43.657 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:43.917 true 00:06:43.917 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:43.917 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.856 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.856 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:44.856 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:45.115 true 00:06:45.115 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:45.115 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.375 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.375 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:45.375 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:45.636 true 00:06:45.636 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:45.636 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.012 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.012 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:47.012 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:47.012 true 00:06:47.012 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:47.012 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.951 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.210 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:48.210 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:48.210 true 00:06:48.210 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:48.210 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.470 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.730 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:48.731 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:48.731 true 00:06:48.990 13:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:48.990 13:48:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.929 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.224 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:50.224 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:50.224 true 00:06:50.483 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:50.483 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.052 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.052 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.312 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:51.312 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:51.571 true 00:06:51.571 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:51.571 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.832 13:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.832 13:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:51.832 13:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:52.092 true 00:06:52.092 13:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:52.092 13:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.472 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.472 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.472 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.472 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.472 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.472 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.472 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:53.472 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:53.731 true 00:06:53.731 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:53.731 13:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.670 13:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.670 Initializing NVMe Controllers 00:06:54.670 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:54.670 Controller IO queue size 128, less than required. 00:06:54.670 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:54.670 Controller IO queue size 128, less than required. 00:06:54.670 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:54.670 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:54.670 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:54.670 Initialization complete. Launching workers. 
00:06:54.670 ======================================================== 00:06:54.670 Latency(us) 00:06:54.670 Device Information : IOPS MiB/s Average min max 00:06:54.670 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1977.06 0.97 46650.62 2095.71 1083592.03 00:06:54.670 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17468.55 8.53 7327.47 2360.07 304998.08 00:06:54.670 ======================================================== 00:06:54.670 Total : 19445.61 9.49 11325.50 2095.71 1083592.03 00:06:54.670 00:06:54.670 13:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:54.670 13:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:54.929 true 00:06:54.929 13:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2803885 00:06:54.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2803885) - No such process 00:06:54.929 13:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2803885 00:06:54.929 13:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.930 13:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:55.189 13:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:55.189 13:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:55.189 13:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:55.189 13:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:55.189 13:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:55.449 null0 00:06:55.449 13:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:55.449 13:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:55.449 13:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:55.449 null1 00:06:55.709 13:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:55.709 13:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:55.709 13:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:55.709 null2 00:06:55.709 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:55.709 
13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:55.709 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:55.982 null3 00:06:55.982 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:55.982 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:55.982 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:56.246 null4 00:06:56.246 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:56.246 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:56.246 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:56.246 null5 00:06:56.246 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:56.246 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:56.246 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:56.505 null6 00:06:56.505 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:56.505 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:56.505 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:56.766 null7 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
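A quick hand check of the I/O statistics block printed above when the stress workload's controller shut down: the Total IOPS is the sum of the two per-namespace rows (1977.06 + 17468.55 = 19445.61), and the Total average latency is consistent with the IOPS-weighted mean of the per-namespace averages, (1977.06 * 46650.62 + 17468.55 * 7327.47) / 19445.61 ≈ 11325.5 us, matching the reported 11325.50 us. The Total min (2095.71 us) and max (1083592.03 us) are the smaller min and larger max of the two rows, and the MiB/s column sums to 9.50 against the reported 9.49, a rounding difference. These figures are derived by hand from the numbers already in the log; they are not additional output from the run.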
00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.766 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
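To make the interleaved trace easier to follow: this part of the run first looped hot-remove/re-add and resize operations against a single namespace while a background I/O generator (PID 2803885 in this run) was alive, and has now created eight null bdevs and launched eight parallel add_remove workers (trace tags @44-@55, @58-@66 and @14-@18 above). The sketch below is reconstructed only from those traced commands; names not visible in the trace (perf_pid, the starting null_size) are assumptions, and the real test/nvmf/target/ns_hotplug_stress.sh may differ in detail.

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Phase 1 (@44-@55): while the I/O generator is alive, toggle namespace 1
    # and grow the NULL1 null bdev by one unit per pass.
    null_size=1000                                 # starting value assumed; the trace shows 1005..1029
    while kill -0 "$perf_pid"; do                  # perf_pid stands for the traced PID 2803885
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        "$rpc_py" bdev_null_resize NULL1 "$null_size"   # prints "true" on success, as in the log
    done
    wait "$perf_pid"
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2

    # Phase 2 (@14-@18, @58-@66): eight workers, each toggling its own namespace
    # ID against its own null bdev ten times.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096    # arguments exactly as traced
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"

Each worker runs in the background, so their rpc.py calls interleave in the console output, which is the add/remove churn visible in the trace that follows.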
00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:56.767 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.767 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:56.767 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.767 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.767 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.767 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:56.767 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2810004 2810005 2810006 2810008 2810011 2810012 2810014 2810016 00:06:56.767 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:56.767 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:56.767 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.767 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.767 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:56.767 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:56.767 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:56.767 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:56.767 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.029 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:57.290 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:57.290 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:57.290 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:57.290 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.290 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.290 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.290 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:57.290 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.550 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.550 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.550 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:57.550 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.550 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.550 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:57.550 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.550 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.550 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:57.550 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.550 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.550 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:57.550 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.550 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.550 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:57.550 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.550 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.550 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.550 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.550 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:57.550 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:57.551 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.551 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.551 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:57.551 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:57.551 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:57.551 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.551 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.551 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:57.551 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:57.551 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.551 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:57.811 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.812 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.812 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:58.071 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:58.071 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:58.071 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:58.071 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:58.071 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:58.071 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.071 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:58.071 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:58.071 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.071 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.072 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:58.072 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.072 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.072 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.331 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
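The ns_hotplug_stress trace above cycles two SPDK JSON-RPC helpers against the same subsystem: nvmf_subsystem_add_ns attaches a null bdev to nqn.2016-06.io.spdk:cnode1 at an explicit namespace ID, and nvmf_subsystem_remove_ns detaches that namespace again. A minimal standalone pair of calls in the same form as the trace, assuming a target is already running with the cnode1 subsystem and the null0 bdev created, would be:

# Attach bdev null0 as namespace 1 of cnode1, then detach namespace 1 again
# (rpc.py path, NQN and bdev name are the ones used by this test run).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
"$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1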
00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.590 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:58.849 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:58.849 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.849 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:58.849 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:58.849 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:58.849 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:58.849 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:58.849 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:58.849 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.849 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.849 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:58.849 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.849 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.849 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:59.109 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.109 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
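The script markers in these entries (ns_hotplug_stress.sh@16 for the loop counter, @17 for every add, @18 for every remove) outline the shape of the stress loop, and the shuffled namespace ordering within each pass suggests the RPCs are issued concurrently. The script itself is not reproduced in this log, so the following is only a hypothetical reconstruction of that pattern, with the rpc.py path, NQN and null-bdev naming taken from the trace; the real script may differ, for example in how it parallelizes the calls:

# Hypothetical sketch of the hotplug stress pattern: ten passes, each
# hot-adding all eight null-bdev namespaces in the background and then
# hot-removing them, so add/remove ordering is deliberately racy.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
for ((i = 0; i < 10; ++i)); do
    for n in {1..8}; do
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
    done
    wait
    for n in {1..8}; do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" &
    done
    wait
done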
00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:59.110 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.370 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:59.630 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:59.630 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:59.630 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:59.630 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:59.630 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:59.630 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:59.630 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:59.630 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.630 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.630 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.631 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:59.890 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:59.890 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:59.890 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:59.890 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:59.890 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:59.890 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:59.890 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:59.890 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.150 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:00.411 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:00.411 rmmod nvme_tcp 00:07:00.671 rmmod nvme_fabrics 00:07:00.671 rmmod nvme_keyring 00:07:00.671 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:00.671 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:07:00.671 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:07:00.671 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2803393 ']' 00:07:00.671 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2803393 00:07:00.671 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2803393 ']' 00:07:00.671 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2803393 00:07:00.671 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:00.671 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.671 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2803393 00:07:00.671 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:00.671 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:00.671 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2803393' 00:07:00.671 killing process with pid 
2803393 00:07:00.671 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2803393 00:07:00.671 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2803393 00:07:00.931 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:00.931 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:00.931 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:00.931 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:00.931 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:00.931 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.932 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:00.932 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.844 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:02.844 00:07:02.844 real 0m47.331s 00:07:02.844 user 3m11.517s 00:07:02.844 sys 0m14.904s 00:07:02.844 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.844 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:02.844 ************************************ 00:07:02.844 END TEST nvmf_ns_hotplug_stress 00:07:02.844 ************************************ 00:07:02.844 13:48:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:02.844 13:48:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:02.844 13:48:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.844 13:48:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:02.844 ************************************ 00:07:02.844 START TEST nvmf_delete_subsystem 00:07:02.844 ************************************ 00:07:02.844 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:03.105 * Looking for test storage... 
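Once the loop exits, the nvmftestfini teardown recorded above runs the same sequence every target test ends with: sync, unload the kernel nvme-tcp and nvme-fabrics modules (which also drops nvme_keyring), kill the nvmf_tgt process started for the test, and flush the address from the initiator-side test interface. A condensed sketch of that sequence, using the PID and interface name from this run, is below; the real helpers in test/nvmf/common.sh and autotest_common.sh wrap each step in retries and error handling.

# Condensed teardown as seen in the log above.
nvmfpid=2803393                    # PID of the nvmf_tgt started for this test
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid"
wait "$nvmfpid"                    # only succeeds in the shell that launched nvmf_tgt
ip -4 addr flush cvl_0_1           # drop the test address from the initiator-side NIC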
00:07:03.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:03.105 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
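The nvmf/common.sh trace that follows (gather_supported_nvmf_pci_devs) builds whitelists of Intel E810/X722 and Mellanox device IDs, then resolves each matching PCI function to its kernel network interface through sysfs, which is how 0000:86:00.0 and 0000:86:00.1 end up mapped to cvl_0_0 and cvl_0_1 below. A minimal sketch of that sysfs lookup, assuming the two PCI addresses reported in this run, is:

# Resolve a NIC's PCI address to its net device name the same way common.sh
# does, via /sys/bus/pci/devices/<pci>/net/ (addresses taken from this run).
for pci in 0000:86:00.0 0000:86:00.1; do
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e "$path" ]] && echo "$pci -> ${path##*/}"
    done
done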
00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:08.455 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:08.455 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:08.455 Found net devices under 0000:86:00.0: cvl_0_0 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:08.455 Found net devices under 0000:86:00.1: cvl_0_1 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:08.455 13:48:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:08.455 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:08.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:08.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:07:08.456 00:07:08.456 --- 10.0.0.2 ping statistics --- 00:07:08.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.456 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:08.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:08.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.416 ms 00:07:08.456 00:07:08.456 --- 10.0.0.1 ping statistics --- 00:07:08.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.456 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2814373 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2814373 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2814373 ']' 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.456 13:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.456 [2024-07-26 13:48:35.834424] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
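For orientation: the nvmf_tcp_init trace above splits this host's two E810 ports (0000:86:00.0/0000:86:00.1, exposed as cvl_0_0 and cvl_0_1) into a target-side network namespace and an initiator side on 10.0.0.0/24, and the nvmf_tgt just started runs inside that namespace. A condensed shell sketch of the same steps, with interface names copied from this log (they are specific to this CI host) and paths shortened to be relative to the SPDK tree:

    # condensed from the nvmf/common.sh xtrace above; cvl_0_0/cvl_0_1 are this host's E810 ports
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root namespace -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back
    # the target itself is then launched inside the namespace (core mask 0x3 = cores 0-1):
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3

Keeping the two ports in separate namespaces makes the NVMe/TCP traffic actually cross the physical link instead of being short-circuited in the local stack.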
00:07:08.456 [2024-07-26 13:48:35.834468] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.456 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.716 [2024-07-26 13:48:35.891072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:08.716 [2024-07-26 13:48:35.971211] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.716 [2024-07-26 13:48:35.971247] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.716 [2024-07-26 13:48:35.971254] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.716 [2024-07-26 13:48:35.971261] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.716 [2024-07-26 13:48:35.971266] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:08.716 [2024-07-26 13:48:35.971307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.716 [2024-07-26 13:48:35.971310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.285 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.285 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:09.285 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:09.285 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:09.285 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.285 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.285 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:09.285 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.285 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.285 [2024-07-26 13:48:36.695502] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.285 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.285 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:09.285 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.285 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.285 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.285 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:09.285 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.285 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.285 [2024-07-26 13:48:36.715672] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.285 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.285 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:09.286 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.286 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.546 NULL1 00:07:09.546 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.546 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:09.546 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.546 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.546 Delay0 00:07:09.546 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.546 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.546 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.546 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.546 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.546 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2814418 00:07:09.546 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:09.546 13:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:09.546 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.546 [2024-07-26 13:48:36.796470] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
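The xtrace records above are target/delete_subsystem.sh provisioning the target through rpc_cmd (the test framework's wrapper around SPDK's RPC client) and then launching spdk_nvme_perf against it. Restated as plain shell, with every argument copied from the trace; the explicit backgrounding and perf_pid capture below are a simplification of what the script's @26-@30 lines do:

    # phase 1 of delete_subsystem.sh as traced above
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512              # null backing bdev
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &        # queue depth 128 against Delay0
    perf_pid=$!
    sleep 2                                              # let I/O pile up behind the delay bdev
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

Delay0 adds on the order of a second of latency per I/O (the bdev_delay_create arguments are in microseconds, and the second perf run further down averages roughly 1,004,000 us), so the 128 queued commands are still in flight when the subsystem is deleted. The long run of "Read/Write completed with error (sct=0, sc=8)" completions below is those commands being failed back to the initiator, and the script later asserts this expected failure with "NOT wait" on the perf pid.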
00:07:11.457 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:11.457 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.457 13:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 starting I/O failed: -6 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 starting I/O failed: -6 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 starting I/O failed: -6 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 starting I/O failed: -6 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 starting I/O failed: -6 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 starting I/O failed: -6 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 starting I/O failed: -6 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 starting I/O failed: -6 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 starting I/O failed: -6 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 starting I/O failed: -6 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 starting I/O failed: -6 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 [2024-07-26 13:48:38.936261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0710 is same with the state(5) to be set 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 starting I/O failed: -6 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Read completed with 
error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 starting I/O failed: -6 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 starting I/O failed: -6 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 starting I/O failed: -6 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 starting I/O failed: -6 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 starting I/O failed: -6 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 starting I/O failed: -6 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 starting I/O failed: -6 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 starting I/O failed: -6 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 [2024-07-26 13:48:38.936865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fce1800d660 is same with the state(5) to be set 00:07:11.718 Write completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.718 Read completed with error (sct=0, sc=8) 00:07:11.719 Write completed with error (sct=0, sc=8) 00:07:11.719 Write completed with error (sct=0, sc=8) 00:07:11.719 Write completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Write completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Write completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Write 
completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Write completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Write completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Write completed with error (sct=0, sc=8) 00:07:11.719 Write completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Write completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Write completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Write completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Write completed with error (sct=0, sc=8) 00:07:11.719 Write completed with error (sct=0, sc=8) 00:07:11.719 Write completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Write completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Write completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Write completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Write completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Write completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 00:07:11.719 Read completed with error (sct=0, sc=8) 
00:07:11.719 Write completed with error (sct=0, sc=8) 00:07:12.660 [2024-07-26 13:48:39.896996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:07:12.660 Write completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Write completed with error (sct=0, sc=8) 00:07:12.660 Write completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Write completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Write completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Write completed with error (sct=0, sc=8) 00:07:12.660 Write completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Write completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Write completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Write completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Write completed with error (sct=0, sc=8) 00:07:12.660 [2024-07-26 13:48:39.939090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0a40 is same with the state(5) to be set 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Write completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Write completed with error (sct=0, sc=8) 00:07:12.660 Write completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 [2024-07-26 13:48:39.939227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fce1800d330 is same with the state(5) to be set 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Write completed with error (sct=0, sc=8) 00:07:12.660 Write completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Write completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Write completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Write completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 
00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Write completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Write completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.660 Read completed with error (sct=0, sc=8) 00:07:12.661 [2024-07-26 13:48:39.940033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0000 is same with the state(5) to be set 00:07:12.661 Read completed with error (sct=0, sc=8) 00:07:12.661 Read completed with error (sct=0, sc=8) 00:07:12.661 Write completed with error (sct=0, sc=8) 00:07:12.661 Read completed with error (sct=0, sc=8) 00:07:12.661 Read completed with error (sct=0, sc=8) 00:07:12.661 Read completed with error (sct=0, sc=8) 00:07:12.661 Write completed with error (sct=0, sc=8) 00:07:12.661 Write completed with error (sct=0, sc=8) 00:07:12.661 Read completed with error (sct=0, sc=8) 00:07:12.661 Read completed with error (sct=0, sc=8) 00:07:12.661 Read completed with error (sct=0, sc=8) 00:07:12.661 Read completed with error (sct=0, sc=8) 00:07:12.661 Write completed with error (sct=0, sc=8) 00:07:12.661 Read completed with error (sct=0, sc=8) 00:07:12.661 Write completed with error (sct=0, sc=8) 00:07:12.661 Read completed with error (sct=0, sc=8) 00:07:12.661 Read completed with error (sct=0, sc=8) 00:07:12.661 Read completed with error (sct=0, sc=8) 00:07:12.661 Read completed with error (sct=0, sc=8) 00:07:12.661 Read completed with error (sct=0, sc=8) 00:07:12.661 Read completed with error (sct=0, sc=8) 00:07:12.661 Write completed with error (sct=0, sc=8) 00:07:12.661 Read completed with error (sct=0, sc=8) 00:07:12.661 Write completed with error (sct=0, sc=8) 00:07:12.661 Read completed with error (sct=0, sc=8) 00:07:12.661 [2024-07-26 13:48:39.940278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba03e0 is same with the state(5) to be set 00:07:12.661 Initializing NVMe Controllers 00:07:12.661 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:12.661 Controller IO queue size 128, less than required. 00:07:12.661 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:12.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:12.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:12.661 Initialization complete. Launching workers. 
00:07:12.661 ======================================================== 00:07:12.661 Latency(us) 00:07:12.661 Device Information : IOPS MiB/s Average min max 00:07:12.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.31 0.08 967523.33 557.82 1012990.44 00:07:12.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 149.95 0.07 900203.96 224.40 1013442.97 00:07:12.661 ======================================================== 00:07:12.661 Total : 320.26 0.16 936003.25 224.40 1013442.97 00:07:12.661 00:07:12.661 [2024-07-26 13:48:39.940782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:07:12.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:12.661 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.661 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:12.661 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2814418 00:07:12.661 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2814418 00:07:13.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2814418) - No such process 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2814418 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2814418 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2814418 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.232 13:48:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:13.232 [2024-07-26 13:48:40.467785] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2815111 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2815111 00:07:13.232 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:13.232 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.232 [2024-07-26 13:48:40.527828] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
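After the first run, which is expected to fail, the records above show the subsystem, listener and Delay0 namespace being re-created (delete_subsystem.sh@48-@50) and a second, shorter perf run being started (-t 3, pid 2815111). The repeated "(( delay++ > 20 ))" / "kill -0 2815111" / "sleep 0.5" records that follow are the script polling for that perf process to exit on its own. Reconstructed in shell it is roughly the loop below; the 0.5 s interval and the >20 bound come from the trace, while the give-up handling is an assumption rather than a copy of the script:

    # polling idiom reconstructed from the delete_subsystem.sh xtrace (script lines 56-60)
    delay=0
    while kill -0 "$perf_pid"; do        # pid 2815111 in this run
        (( delay++ > 20 )) && break      # stop polling after ~10 s (assumed failure path)
        sleep 0.5
    done
    wait "$perf_pid"                     # with ~1 s Delay0 latency, this run completes cleanly

The "kill: (2815111) - No such process" record and the error-free latency summary further down show the loop exiting the normal way.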
00:07:13.802 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:13.802 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2815111 00:07:13.802 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:14.062 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:14.062 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2815111 00:07:14.062 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:14.631 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:14.631 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2815111 00:07:14.631 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:15.201 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:15.201 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2815111 00:07:15.201 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:15.771 13:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:15.771 13:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2815111 00:07:15.771 13:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:16.345 13:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:16.345 13:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2815111 00:07:16.345 13:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:16.345 Initializing NVMe Controllers 00:07:16.345 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:16.345 Controller IO queue size 128, less than required. 00:07:16.345 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:16.345 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:16.345 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:16.345 Initialization complete. Launching workers. 
00:07:16.345 ======================================================== 00:07:16.345 Latency(us) 00:07:16.345 Device Information : IOPS MiB/s Average min max 00:07:16.345 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004486.74 1000428.32 1042442.65 00:07:16.345 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006094.49 1000586.52 1012662.62 00:07:16.345 ======================================================== 00:07:16.345 Total : 256.00 0.12 1005290.61 1000428.32 1042442.65 00:07:16.345 00:07:16.603 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:16.603 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2815111 00:07:16.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2815111) - No such process 00:07:16.603 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2815111 00:07:16.603 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:16.603 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:16.603 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:16.603 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:07:16.603 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:16.604 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:07:16.604 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:16.604 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:16.604 rmmod nvme_tcp 00:07:16.604 rmmod nvme_fabrics 00:07:16.863 rmmod nvme_keyring 00:07:16.863 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:16.863 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:07:16.863 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:07:16.863 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2814373 ']' 00:07:16.863 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2814373 00:07:16.863 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2814373 ']' 00:07:16.863 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2814373 00:07:16.863 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:16.863 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.863 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2814373 00:07:16.863 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.863 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:07:16.863 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2814373' 00:07:16.863 killing process with pid 2814373 00:07:16.863 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2814373 00:07:16.863 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2814373 00:07:17.122 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:17.122 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:17.122 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:17.122 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:17.122 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:17.122 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.122 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:17.122 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.034 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:19.034 00:07:19.034 real 0m16.090s 00:07:19.034 user 0m30.371s 00:07:19.034 sys 0m4.919s 00:07:19.034 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.034 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.034 ************************************ 00:07:19.034 END TEST nvmf_delete_subsystem 00:07:19.034 ************************************ 00:07:19.034 13:48:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:19.034 13:48:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:19.034 13:48:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.034 13:48:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:19.034 ************************************ 00:07:19.034 START TEST nvmf_host_management 00:07:19.034 ************************************ 00:07:19.034 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:19.295 * Looking for test storage... 
00:07:19.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:19.295 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:07:24.658 
13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:24.658 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:24.659 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:24.659 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:24.659 Found net devices under 0000:86:00.0: cvl_0_0 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:24.659 Found net devices under 0000:86:00.1: cvl_0_1 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:24.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:24.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:07:24.659 00:07:24.659 --- 10.0.0.2 ping statistics --- 00:07:24.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.659 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:24.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:24.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.422 ms 00:07:24.659 00:07:24.659 --- 10.0.0.1 ping statistics --- 00:07:24.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.659 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2819108 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2819108 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2819108 ']' 00:07:24.659 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.660 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.660 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.660 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.660 13:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.660 [2024-07-26 13:48:52.003592] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:07:24.660 [2024-07-26 13:48:52.003635] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.660 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.660 [2024-07-26 13:48:52.060492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.920 [2024-07-26 13:48:52.143824] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.920 [2024-07-26 13:48:52.143859] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:24.920 [2024-07-26 13:48:52.143866] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:24.920 [2024-07-26 13:48:52.143872] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:24.920 [2024-07-26 13:48:52.143877] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:24.920 [2024-07-26 13:48:52.143913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.920 [2024-07-26 13:48:52.143998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.920 [2024-07-26 13:48:52.144112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.920 [2024-07-26 13:48:52.144113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:25.491 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.491 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:25.491 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:25.491 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:25.491 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.491 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.491 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:25.491 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.491 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.491 [2024-07-26 13:48:52.861314] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.491 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.491 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:07:25.491 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:25.491 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.491 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:25.491 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:25.491 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:25.491 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.491 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.491 Malloc0 00:07:25.491 [2024-07-26 13:48:52.920935] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2819374 00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2819374 /var/tmp/bdevperf.sock 00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2819374 ']' 00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:25.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
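The rpc_cmd batch traced above (host_management.sh@23 and @30) is what produced the Malloc0 bdev and the NVMe/TCP listener on 10.0.0.2 port 4420 reported next to it. The generated rpcs.txt itself is not echoed in this trace, so the following is only a plausible hand-typed equivalent using scripts/rpc.py from an SPDK checkout: the malloc size, block size, and serial number are illustrative assumptions, while the transport options, NQNs, address, and port are taken from the surrounding log.

RPC="scripts/rpc.py -s /var/tmp/spdk.sock"   # default socket used by rpc_cmd; a filesystem UNIX socket, so no 'ip netns exec' is needed for RPCs
$RPC nvmf_create_transport -t tcp -o -u 8192                                        # same options as host_management.sh@18 above
$RPC bdev_malloc_create 64 512 -b Malloc0                                           # assumed: 64 MiB malloc bdev with 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK00000000000001         # serial number is an assumption
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # allow-list entry that this test toggles later
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420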
00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:25.752 { 00:07:25.752 "params": { 00:07:25.752 "name": "Nvme$subsystem", 00:07:25.752 "trtype": "$TEST_TRANSPORT", 00:07:25.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:25.752 "adrfam": "ipv4", 00:07:25.752 "trsvcid": "$NVMF_PORT", 00:07:25.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:25.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:25.752 "hdgst": ${hdgst:-false}, 00:07:25.752 "ddgst": ${ddgst:-false} 00:07:25.752 }, 00:07:25.752 "method": "bdev_nvme_attach_controller" 00:07:25.752 } 00:07:25.752 EOF 00:07:25.752 )") 00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:25.752 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:25.752 "params": { 00:07:25.752 "name": "Nvme0", 00:07:25.752 "trtype": "tcp", 00:07:25.752 "traddr": "10.0.0.2", 00:07:25.752 "adrfam": "ipv4", 00:07:25.752 "trsvcid": "4420", 00:07:25.752 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:25.752 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:25.752 "hdgst": false, 00:07:25.752 "ddgst": false 00:07:25.752 }, 00:07:25.752 "method": "bdev_nvme_attach_controller" 00:07:25.752 }' 00:07:25.752 [2024-07-26 13:48:53.013904] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:07:25.752 [2024-07-26 13:48:53.013952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819374 ] 00:07:25.752 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.752 [2024-07-26 13:48:53.069270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.752 [2024-07-26 13:48:53.142786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.332 Running I/O for 10 seconds... 
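The job configuration printed just above is handed to bdevperf through process substitution (--json /dev/fd/63), and the trace that follows polls the tool over its RPC socket until read I/O is observed (host_management.sh@54-@58). A minimal standalone sketch of that polling, assuming an SPDK checkout with scripts/rpc.py and jq available; the poll interval is an assumption, since the helper's sleep is not shown here.

sock=/var/tmp/bdevperf.sock
scripts/rpc.py -s "$sock" framework_wait_init                       # same call as host_management.sh@75 below
while :; do
    reads=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break                                    # 100-read threshold, as checked at host_management.sh@58 below
    sleep 0.5                                                        # assumed interval
done
echo "Nvme0n1 has completed $reads reads"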
00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=337 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 337 -ge 100 ']' 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.599 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.599 [2024-07-26 
13:48:53.908401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908457] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908463] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908477] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908563] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same 
with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908580] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908609] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908615] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908626] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908632] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908656] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908674] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908679] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908685] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908691] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908696] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908702] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.599 [2024-07-26 13:48:53.908709] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.600 [2024-07-26 13:48:53.908717] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.600 [2024-07-26 13:48:53.908723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.600 [2024-07-26 13:48:53.908728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.600 [2024-07-26 13:48:53.908734] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.600 [2024-07-26 13:48:53.908740] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.600 [2024-07-26 13:48:53.908746] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.600 [2024-07-26 13:48:53.908752] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.600 [2024-07-26 13:48:53.908757] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.600 [2024-07-26 13:48:53.908763] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.600 [2024-07-26 13:48:53.908769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.600 [2024-07-26 13:48:53.908774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.600 [2024-07-26 13:48:53.908780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.600 [2024-07-26 13:48:53.908786] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.600 [2024-07-26 13:48:53.908791] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.600 [2024-07-26 13:48:53.908797] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010580 is same with the state(5) to be set 00:07:26.600 [2024-07-26 13:48:53.909474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:26.600 [2024-07-26 13:48:53.909843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.600 [2024-07-26 13:48:53.909985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.600 [2024-07-26 13:48:53.909992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 
[2024-07-26 13:48:53.910000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 
13:48:53.910154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910303] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910449] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.601 [2024-07-26 13:48:53.910470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.601 [2024-07-26 13:48:53.910478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2567660 is same with the state(5) to be set 00:07:26.601 [2024-07-26 13:48:53.910529] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2567660 was disconnected and freed. reset controller. 00:07:26.601 [2024-07-26 13:48:53.911475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:26.601 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.601 task offset: 49152 on job bdev=Nvme0n1 fails 00:07:26.601 00:07:26.601 Latency(us) 00:07:26.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.601 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:26.601 Job: Nvme0n1 ended in about 0.44 seconds with error 00:07:26.601 Verification LBA range: start 0x0 length 0x400 00:07:26.601 Nvme0n1 : 0.44 873.39 54.59 145.56 0.00 61440.16 3647.22 64738.17 00:07:26.601 =================================================================================================================== 00:07:26.601 Total : 873.39 54.59 145.56 0.00 61440.16 3647.22 64738.17 00:07:26.601 [2024-07-26 13:48:53.913104] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.601 [2024-07-26 13:48:53.913120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2135980 (9): Bad file descriptor 00:07:26.601 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:26.601 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.601 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.602 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.602 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:26.602 [2024-07-26 13:48:53.929716] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
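The preceding trace is the host-management path itself: host_management.sh@84 removed nqn.2016-06.io.spdk:host0 from the subsystem's allowed-host list while bdevperf had I/O in flight, which is consistent with the ABORTED - SQ DELETION completions and the freed qpair above, and @85 re-added the host so the initiator's controller reset could succeed. A hand-typed equivalent of that toggle, assuming the same default target RPC socket and an SPDK checkout, might look like:

RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # in-flight I/O from this host gets aborted
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0      # the host may now reconnect and reset the controller
sleep 1                                                                                # mirrors host_management.sh@87, giving the reset time to complete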
00:07:27.596 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2819374 00:07:27.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2819374) - No such process 00:07:27.596 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:27.596 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:27.596 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:27.596 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:27.596 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:27.596 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:27.596 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:27.596 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:27.596 { 00:07:27.596 "params": { 00:07:27.596 "name": "Nvme$subsystem", 00:07:27.596 "trtype": "$TEST_TRANSPORT", 00:07:27.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:27.596 "adrfam": "ipv4", 00:07:27.596 "trsvcid": "$NVMF_PORT", 00:07:27.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:27.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:27.596 "hdgst": ${hdgst:-false}, 00:07:27.596 "ddgst": ${ddgst:-false} 00:07:27.596 }, 00:07:27.596 "method": "bdev_nvme_attach_controller" 00:07:27.596 } 00:07:27.596 EOF 00:07:27.596 )") 00:07:27.596 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:27.596 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:27.596 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:27.596 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:27.596 "params": { 00:07:27.596 "name": "Nvme0", 00:07:27.596 "trtype": "tcp", 00:07:27.596 "traddr": "10.0.0.2", 00:07:27.596 "adrfam": "ipv4", 00:07:27.596 "trsvcid": "4420", 00:07:27.596 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:27.596 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:27.596 "hdgst": false, 00:07:27.596 "ddgst": false 00:07:27.596 }, 00:07:27.596 "method": "bdev_nvme_attach_controller" 00:07:27.596 }' 00:07:27.596 [2024-07-26 13:48:54.974912] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:07:27.596 [2024-07-26 13:48:54.974961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819737 ] 00:07:27.596 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.596 [2024-07-26 13:48:55.029658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.856 [2024-07-26 13:48:55.101006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.116 Running I/O for 1 seconds... 00:07:29.055 00:07:29.055 Latency(us) 00:07:29.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.055 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:29.055 Verification LBA range: start 0x0 length 0x400 00:07:29.055 Nvme0n1 : 1.10 934.06 58.38 0.00 0.00 65330.31 16754.42 64738.17 00:07:29.055 =================================================================================================================== 00:07:29.055 Total : 934.06 58.38 0.00 0.00 65330.31 16754.42 64738.17 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:29.315 rmmod nvme_tcp 00:07:29.315 rmmod nvme_fabrics 00:07:29.315 rmmod nvme_keyring 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2819108 ']' 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2819108 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2819108 ']' 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2819108 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- common/autotest_common.sh@955 -- # uname 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2819108 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2819108' 00:07:29.315 killing process with pid 2819108 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2819108 00:07:29.315 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2819108 00:07:29.575 [2024-07-26 13:48:56.887743] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:29.575 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:29.575 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:29.575 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:29.575 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:29.575 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:29.575 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.575 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.575 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.118 13:48:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:32.118 13:48:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:32.118 00:07:32.118 real 0m12.541s 00:07:32.118 user 0m23.219s 00:07:32.118 sys 0m5.061s 00:07:32.118 13:48:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.118 13:48:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.118 ************************************ 00:07:32.118 END TEST nvmf_host_management 00:07:32.118 ************************************ 00:07:32.118 13:48:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:32.118 13:48:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:32.118 13:48:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.118 13:48:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:32.118 ************************************ 00:07:32.118 START TEST nvmf_lvol 00:07:32.118 ************************************ 00:07:32.118 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:32.118 * Looking for test storage... 00:07:32.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.118 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:32.118 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:32.118 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.118 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.118 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.118 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:07:32.119 13:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:37.406 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:37.406 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:37.406 Found net devices under 0000:86:00.0: cvl_0_0 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:37.406 Found net devices under 0000:86:00.1: cvl_0_1 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.406 13:49:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.406 13:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.406 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.406 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.406 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:37.406 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.406 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.406 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.406 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:37.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:07:37.406 00:07:37.406 --- 10.0.0.2 ping statistics --- 00:07:37.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.406 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:07:37.406 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
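The nvmf_tcp_init sequence above moves the target-side port into its own network namespace and then checks reachability in both directions. Condensed from the commands in this run (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing are specific to this test bed):

    ip netns add cvl_0_0_ns_spdk                        # namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns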
00:07:37.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.459 ms 00:07:37.406 00:07:37.406 --- 10.0.0.1 ping statistics --- 00:07:37.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.406 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2823385 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2823385 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2823385 ']' 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:37.407 13:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:37.407 [2024-07-26 13:49:04.269651] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
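nvmfappstart above launches nvmf_tgt inside that namespace and then blocks until the RPC socket answers. A rough sketch of the same step, with paths shortened relative to the spdk checkout; the polling loop is a simplified, hypothetical stand-in for the waitforlisten helper in common/autotest_common.sh, which performs additional checks:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    # crude wait-for-RPC loop; rpc_get_methods succeeds once the app is listening
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done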
00:07:37.407 [2024-07-26 13:49:04.269696] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.407 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.407 [2024-07-26 13:49:04.324819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:37.407 [2024-07-26 13:49:04.404998] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.407 [2024-07-26 13:49:04.405033] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.407 [2024-07-26 13:49:04.405040] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.407 [2024-07-26 13:49:04.405049] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.407 [2024-07-26 13:49:04.405070] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:37.407 [2024-07-26 13:49:04.405109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.407 [2024-07-26 13:49:04.405203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.407 [2024-07-26 13:49:04.405204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.668 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.668 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:37.668 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:37.668 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:37.668 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:37.668 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.668 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:37.928 [2024-07-26 13:49:05.258563] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.928 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:38.188 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:38.188 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:38.448 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:38.448 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:38.448 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:38.708 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=50546fbf-4bbd-4f65-b751-60de15bf5a35 
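The nvmf_lvol setup above creates the TCP transport and the backing store for the logical volumes: two 64 MiB malloc bdevs striped into a raid0, with an lvstore on top. The same RPC sequence as shown in the log, with rpc.py standing for spdk/scripts/rpc.py and the lvstore UUID captured into a shell variable here for later use:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                       # -> Malloc0
    rpc.py bdev_malloc_create 64 512                       # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)       # prints the lvstore UUID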
00:07:38.708 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 50546fbf-4bbd-4f65-b751-60de15bf5a35 lvol 20 00:07:38.968 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c5d344da-94e3-47c4-a20d-2b014ebd793a 00:07:38.968 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:38.968 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c5d344da-94e3-47c4-a20d-2b014ebd793a 00:07:39.228 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:39.488 [2024-07-26 13:49:06.707638] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.488 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:39.488 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2823876 00:07:39.488 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:39.488 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:39.748 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.686 13:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c5d344da-94e3-47c4-a20d-2b014ebd793a MY_SNAPSHOT 00:07:40.946 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=66fb3937-1805-4ba3-9464-39c9f1b936e2 00:07:40.946 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c5d344da-94e3-47c4-a20d-2b014ebd793a 30 00:07:40.946 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 66fb3937-1805-4ba3-9464-39c9f1b936e2 MY_CLONE 00:07:41.205 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bf93a977-dac8-470e-aa06-868307521ec0 00:07:41.205 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bf93a977-dac8-470e-aa06-868307521ec0 00:07:41.775 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2823876 00:07:49.904 Initializing NVMe Controllers 00:07:49.904 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:49.904 Controller IO queue size 128, less than required. 00:07:49.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
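The rest of the lvol test exports a 20 MiB volume over NVMe/TCP, runs spdk_nvme_perf against it, and exercises the snapshot, resize, clone, and inflate path while I/O is in flight. Condensed from the RPCs and the perf invocation in the log; capturing the returned UUIDs into variables and the backgrounding of perf are this sketch's own shorthand, and paths are shortened relative to the spdk checkout:

    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    ./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &

    snapshot=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    rpc.py bdev_lvol_resize "$lvol" 30
    clone=$(rpc.py bdev_lvol_clone "$snapshot" MY_CLONE)
    rpc.py bdev_lvol_inflate "$clone"
    wait    # let the perf run finish before teardown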
00:07:49.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:49.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:49.904 Initialization complete. Launching workers. 00:07:49.904 ======================================================== 00:07:49.904 Latency(us) 00:07:49.904 Device Information : IOPS MiB/s Average min max 00:07:49.904 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12314.90 48.11 10398.81 2092.16 54254.74 00:07:49.904 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12103.40 47.28 10579.00 3763.96 54913.78 00:07:49.904 ======================================================== 00:07:49.904 Total : 24418.30 95.38 10488.12 2092.16 54913.78 00:07:49.904 00:07:49.904 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:50.166 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c5d344da-94e3-47c4-a20d-2b014ebd793a 00:07:50.479 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 50546fbf-4bbd-4f65-b751-60de15bf5a35 00:07:50.479 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:50.479 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:50.479 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:50.479 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:50.479 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:50.479 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:50.479 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:50.479 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:50.479 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:50.479 rmmod nvme_tcp 00:07:50.479 rmmod nvme_fabrics 00:07:50.479 rmmod nvme_keyring 00:07:50.479 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:50.479 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:50.479 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:50.479 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2823385 ']' 00:07:50.479 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2823385 00:07:50.479 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2823385 ']' 00:07:50.479 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2823385 00:07:50.479 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:50.479 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:50.479 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2823385 00:07:50.747 13:49:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:50.747 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:50.747 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2823385' 00:07:50.747 killing process with pid 2823385 00:07:50.747 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2823385 00:07:50.747 13:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2823385 00:07:50.747 13:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:50.747 13:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:50.747 13:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:50.747 13:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:50.747 13:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:50.747 13:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.747 13:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.747 13:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:53.291 00:07:53.291 real 0m21.145s 00:07:53.291 user 1m3.484s 00:07:53.291 sys 0m6.607s 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.291 ************************************ 00:07:53.291 END TEST nvmf_lvol 00:07:53.291 ************************************ 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:53.291 ************************************ 00:07:53.291 START TEST nvmf_lvs_grow 00:07:53.291 ************************************ 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:53.291 * Looking for test storage... 
00:07:53.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.291 13:49:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:53.291 13:49:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:07:53.291 13:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:58.575 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:58.575 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:58.576 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:58.576 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:58.576 
13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:58.576 Found net devices under 0000:86:00.0: cvl_0_0 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:58.576 Found net devices under 0000:86:00.1: cvl_0_1 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:58.576 13:49:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:58.576 13:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:58.576 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:58.576 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:58.576 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:58.576 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:58.576 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:58.576 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:58.576 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:58.576 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:58.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:58.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:07:58.576 00:07:58.576 --- 10.0.0.2 ping statistics --- 00:07:58.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.576 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:07:58.576 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:58.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:58.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.477 ms 00:07:58.576 00:07:58.576 --- 10.0.0.1 ping statistics --- 00:07:58.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.576 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:07:58.576 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.576 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:07:58.576 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:58.576 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.576 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:58.576 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:58.576 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.576 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:58.576 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:58.577 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:58.577 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:58.577 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:58.577 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:58.577 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2829057 00:07:58.577 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2829057 00:07:58.577 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:58.577 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2829057 ']' 00:07:58.577 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.577 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:58.577 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.577 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:58.577 13:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:58.577 [2024-07-26 13:49:25.333624] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
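For reference, the nvmf_tcp_init sequence traced above turns the two ice ports discovered earlier (cvl_0_0 and cvl_0_1) into a point-to-point NVMe/TCP test bed: the target port is moved into its own network namespace, both sides get addresses on 10.0.0.0/24, TCP port 4420 is opened in iptables, reachability is checked with ping in both directions, and the target application is then launched inside that namespace. A minimal sketch of those steps, assuming the same interface names and addresses as this run (long paths shortened):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                               # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # target app in the namespace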
00:07:58.577 [2024-07-26 13:49:25.333670] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.577 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.577 [2024-07-26 13:49:25.390336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.577 [2024-07-26 13:49:25.469144] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.577 [2024-07-26 13:49:25.469179] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.577 [2024-07-26 13:49:25.469186] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.577 [2024-07-26 13:49:25.469192] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.577 [2024-07-26 13:49:25.469197] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:58.577 [2024-07-26 13:49:25.469219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.837 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.837 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:58.837 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:58.837 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:58.837 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:58.837 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.837 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:59.097 [2024-07-26 13:49:26.333160] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.097 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:59.097 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:59.097 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.097 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:59.097 ************************************ 00:07:59.097 START TEST lvs_grow_clean 00:07:59.097 ************************************ 00:07:59.097 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:59.097 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:59.097 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:59.097 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:59.097 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:07:59.097 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:59.097 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:59.097 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:59.097 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:59.097 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:59.356 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:59.356 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:59.356 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6c5c83f7-41ff-43c0-9ef3-cd245e706403 00:07:59.356 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c5c83f7-41ff-43c0-9ef3-cd245e706403 00:07:59.356 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:59.616 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:59.616 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:59.616 13:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6c5c83f7-41ff-43c0-9ef3-cd245e706403 lvol 150 00:07:59.875 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=74491f56-5e17-468a-81cc-fd9f8f6d1d40 00:07:59.875 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:59.875 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:59.875 [2024-07-26 13:49:27.275783] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:59.875 [2024-07-26 13:49:27.275830] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:59.875 true 00:07:59.875 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:59.875 13:49:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c5c83f7-41ff-43c0-9ef3-cd245e706403 00:08:00.135 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:00.135 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:00.395 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 74491f56-5e17-468a-81cc-fd9f8f6d1d40 00:08:00.395 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:00.655 [2024-07-26 13:49:27.953885] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.655 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:00.915 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2829610 00:08:00.915 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:00.915 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:00.915 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2829610 /var/tmp/bdevperf.sock 00:08:00.915 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2829610 ']' 00:08:00.915 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:00.915 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.915 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:00.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:00.915 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.915 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:00.915 [2024-07-26 13:49:28.180086] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
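The lvs_grow_clean setup traced above builds the whole device stack over JSON-RPC: a 200M file-backed AIO bdev, an lvol store on it with 4 MiB clusters (49 data clusters to start), a 150M lvol, and an NVMe/TCP subsystem that exports that lvol on 10.0.0.2:4420. Condensed to the RPC calls shown in the trace (a sketch; $SPDK stands for the checkout path, rpc.py for scripts/rpc.py, and $lvs/$lvol for the UUIDs printed above):

  truncate -s 200M $SPDK/test/nvmf/target/aio_bdev
  rpc.py bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420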
00:08:00.915 [2024-07-26 13:49:28.180135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829610 ] 00:08:00.915 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.915 [2024-07-26 13:49:28.235075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.915 [2024-07-26 13:49:28.314927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.856 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.856 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:01.856 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:01.856 Nvme0n1 00:08:01.856 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:02.116 [ 00:08:02.116 { 00:08:02.116 "name": "Nvme0n1", 00:08:02.116 "aliases": [ 00:08:02.116 "74491f56-5e17-468a-81cc-fd9f8f6d1d40" 00:08:02.116 ], 00:08:02.116 "product_name": "NVMe disk", 00:08:02.116 "block_size": 4096, 00:08:02.116 "num_blocks": 38912, 00:08:02.116 "uuid": "74491f56-5e17-468a-81cc-fd9f8f6d1d40", 00:08:02.116 "assigned_rate_limits": { 00:08:02.116 "rw_ios_per_sec": 0, 00:08:02.116 "rw_mbytes_per_sec": 0, 00:08:02.116 "r_mbytes_per_sec": 0, 00:08:02.116 "w_mbytes_per_sec": 0 00:08:02.116 }, 00:08:02.116 "claimed": false, 00:08:02.116 "zoned": false, 00:08:02.116 "supported_io_types": { 00:08:02.116 "read": true, 00:08:02.116 "write": true, 00:08:02.116 "unmap": true, 00:08:02.116 "flush": true, 00:08:02.116 "reset": true, 00:08:02.116 "nvme_admin": true, 00:08:02.116 "nvme_io": true, 00:08:02.116 "nvme_io_md": false, 00:08:02.116 "write_zeroes": true, 00:08:02.116 "zcopy": false, 00:08:02.116 "get_zone_info": false, 00:08:02.116 "zone_management": false, 00:08:02.116 "zone_append": false, 00:08:02.116 "compare": true, 00:08:02.116 "compare_and_write": true, 00:08:02.116 "abort": true, 00:08:02.116 "seek_hole": false, 00:08:02.116 "seek_data": false, 00:08:02.116 "copy": true, 00:08:02.116 "nvme_iov_md": false 00:08:02.116 }, 00:08:02.116 "memory_domains": [ 00:08:02.116 { 00:08:02.117 "dma_device_id": "system", 00:08:02.117 "dma_device_type": 1 00:08:02.117 } 00:08:02.117 ], 00:08:02.117 "driver_specific": { 00:08:02.117 "nvme": [ 00:08:02.117 { 00:08:02.117 "trid": { 00:08:02.117 "trtype": "TCP", 00:08:02.117 "adrfam": "IPv4", 00:08:02.117 "traddr": "10.0.0.2", 00:08:02.117 "trsvcid": "4420", 00:08:02.117 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:02.117 }, 00:08:02.117 "ctrlr_data": { 00:08:02.117 "cntlid": 1, 00:08:02.117 "vendor_id": "0x8086", 00:08:02.117 "model_number": "SPDK bdev Controller", 00:08:02.117 "serial_number": "SPDK0", 00:08:02.117 "firmware_revision": "24.09", 00:08:02.117 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:02.117 "oacs": { 00:08:02.117 "security": 0, 00:08:02.117 "format": 0, 00:08:02.117 "firmware": 0, 00:08:02.117 "ns_manage": 0 00:08:02.117 }, 00:08:02.117 
"multi_ctrlr": true, 00:08:02.117 "ana_reporting": false 00:08:02.117 }, 00:08:02.117 "vs": { 00:08:02.117 "nvme_version": "1.3" 00:08:02.117 }, 00:08:02.117 "ns_data": { 00:08:02.117 "id": 1, 00:08:02.117 "can_share": true 00:08:02.117 } 00:08:02.117 } 00:08:02.117 ], 00:08:02.117 "mp_policy": "active_passive" 00:08:02.117 } 00:08:02.117 } 00:08:02.117 ] 00:08:02.117 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2829809 00:08:02.117 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:02.117 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:02.117 Running I/O for 10 seconds... 00:08:03.497 Latency(us) 00:08:03.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:03.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.497 Nvme0n1 : 1.00 21634.00 84.51 0.00 0.00 0.00 0.00 0.00 00:08:03.497 =================================================================================================================== 00:08:03.497 Total : 21634.00 84.51 0.00 0.00 0.00 0.00 0.00 00:08:03.497 00:08:04.067 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6c5c83f7-41ff-43c0-9ef3-cd245e706403 00:08:04.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.327 Nvme0n1 : 2.00 22123.50 86.42 0.00 0.00 0.00 0.00 0.00 00:08:04.327 =================================================================================================================== 00:08:04.327 Total : 22123.50 86.42 0.00 0.00 0.00 0.00 0.00 00:08:04.327 00:08:04.327 true 00:08:04.327 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:04.327 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c5c83f7-41ff-43c0-9ef3-cd245e706403 00:08:04.587 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:04.587 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:04.587 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2829809 00:08:05.157 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.157 Nvme0n1 : 3.00 22172.33 86.61 0.00 0.00 0.00 0.00 0.00 00:08:05.157 =================================================================================================================== 00:08:05.157 Total : 22172.33 86.61 0.00 0.00 0.00 0.00 0.00 00:08:05.157 00:08:06.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.537 Nvme0n1 : 4.00 22137.50 86.47 0.00 0.00 0.00 0.00 0.00 00:08:06.537 =================================================================================================================== 00:08:06.537 Total : 22137.50 86.47 0.00 0.00 0.00 0.00 0.00 00:08:06.537 00:08:07.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:08:07.478 Nvme0n1 : 5.00 22085.00 86.27 0.00 0.00 0.00 0.00 0.00 00:08:07.478 =================================================================================================================== 00:08:07.478 Total : 22085.00 86.27 0.00 0.00 0.00 0.00 0.00 00:08:07.478 00:08:08.417 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.417 Nvme0n1 : 6.00 22122.33 86.42 0.00 0.00 0.00 0.00 0.00 00:08:08.417 =================================================================================================================== 00:08:08.417 Total : 22122.33 86.42 0.00 0.00 0.00 0.00 0.00 00:08:08.417 00:08:09.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.355 Nvme0n1 : 7.00 22109.86 86.37 0.00 0.00 0.00 0.00 0.00 00:08:09.355 =================================================================================================================== 00:08:09.355 Total : 22109.86 86.37 0.00 0.00 0.00 0.00 0.00 00:08:09.355 00:08:10.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.454 Nvme0n1 : 8.00 22053.12 86.15 0.00 0.00 0.00 0.00 0.00 00:08:10.454 =================================================================================================================== 00:08:10.454 Total : 22053.12 86.15 0.00 0.00 0.00 0.00 0.00 00:08:10.454 00:08:11.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.393 Nvme0n1 : 9.00 22082.78 86.26 0.00 0.00 0.00 0.00 0.00 00:08:11.393 =================================================================================================================== 00:08:11.393 Total : 22082.78 86.26 0.00 0.00 0.00 0.00 0.00 00:08:11.393 00:08:12.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.334 Nvme0n1 : 10.00 22086.60 86.28 0.00 0.00 0.00 0.00 0.00 00:08:12.334 =================================================================================================================== 00:08:12.334 Total : 22086.60 86.28 0.00 0.00 0.00 0.00 0.00 00:08:12.334 00:08:12.334 00:08:12.334 Latency(us) 00:08:12.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.334 Nvme0n1 : 10.01 22081.02 86.25 0.00 0.00 5792.21 3390.78 33964.74 00:08:12.334 =================================================================================================================== 00:08:12.334 Total : 22081.02 86.25 0.00 0.00 5792.21 3390.78 33964.74 00:08:12.334 0 00:08:12.334 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2829610 00:08:12.334 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2829610 ']' 00:08:12.334 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2829610 00:08:12.334 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:12.334 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.334 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2829610 00:08:12.334 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:12.334 
13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:12.334 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2829610' 00:08:12.334 killing process with pid 2829610 00:08:12.334 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2829610 00:08:12.334 Received shutdown signal, test time was about 10.000000 seconds 00:08:12.334 00:08:12.334 Latency(us) 00:08:12.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.334 =================================================================================================================== 00:08:12.334 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:12.334 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2829610 00:08:12.594 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:12.595 13:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:12.855 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c5c83f7-41ff-43c0-9ef3-cd245e706403 00:08:12.855 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:13.115 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:13.115 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:13.115 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:13.115 [2024-07-26 13:49:40.532629] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:13.376 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c5c83f7-41ff-43c0-9ef3-cd245e706403 00:08:13.376 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:13.376 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c5c83f7-41ff-43c0-9ef3-cd245e706403 00:08:13.376 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:13.376 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.376 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:13.376 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.376 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:13.376 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.376 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:13.376 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:13.376 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c5c83f7-41ff-43c0-9ef3-cd245e706403 00:08:13.376 request: 00:08:13.376 { 00:08:13.376 "uuid": "6c5c83f7-41ff-43c0-9ef3-cd245e706403", 00:08:13.376 "method": "bdev_lvol_get_lvstores", 00:08:13.376 "req_id": 1 00:08:13.376 } 00:08:13.376 Got JSON-RPC error response 00:08:13.376 response: 00:08:13.376 { 00:08:13.376 "code": -19, 00:08:13.376 "message": "No such device" 00:08:13.376 } 00:08:13.376 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:13.376 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:13.376 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:13.376 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:13.376 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:13.636 aio_bdev 00:08:13.636 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 74491f56-5e17-468a-81cc-fd9f8f6d1d40 00:08:13.636 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=74491f56-5e17-468a-81cc-fd9f8f6d1d40 00:08:13.636 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:13.636 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:13.636 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:13.636 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:13.636 13:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:13.897 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b 74491f56-5e17-468a-81cc-fd9f8f6d1d40 -t 2000 00:08:13.897 [ 00:08:13.897 { 00:08:13.897 "name": "74491f56-5e17-468a-81cc-fd9f8f6d1d40", 00:08:13.897 "aliases": [ 00:08:13.897 "lvs/lvol" 00:08:13.897 ], 00:08:13.897 "product_name": "Logical Volume", 00:08:13.897 "block_size": 4096, 00:08:13.897 "num_blocks": 38912, 00:08:13.897 "uuid": "74491f56-5e17-468a-81cc-fd9f8f6d1d40", 00:08:13.897 "assigned_rate_limits": { 00:08:13.897 "rw_ios_per_sec": 0, 00:08:13.897 "rw_mbytes_per_sec": 0, 00:08:13.897 "r_mbytes_per_sec": 0, 00:08:13.897 "w_mbytes_per_sec": 0 00:08:13.897 }, 00:08:13.897 "claimed": false, 00:08:13.897 "zoned": false, 00:08:13.897 "supported_io_types": { 00:08:13.897 "read": true, 00:08:13.897 "write": true, 00:08:13.897 "unmap": true, 00:08:13.897 "flush": false, 00:08:13.897 "reset": true, 00:08:13.897 "nvme_admin": false, 00:08:13.897 "nvme_io": false, 00:08:13.897 "nvme_io_md": false, 00:08:13.897 "write_zeroes": true, 00:08:13.897 "zcopy": false, 00:08:13.897 "get_zone_info": false, 00:08:13.897 "zone_management": false, 00:08:13.897 "zone_append": false, 00:08:13.897 "compare": false, 00:08:13.897 "compare_and_write": false, 00:08:13.897 "abort": false, 00:08:13.897 "seek_hole": true, 00:08:13.897 "seek_data": true, 00:08:13.897 "copy": false, 00:08:13.897 "nvme_iov_md": false 00:08:13.897 }, 00:08:13.897 "driver_specific": { 00:08:13.897 "lvol": { 00:08:13.897 "lvol_store_uuid": "6c5c83f7-41ff-43c0-9ef3-cd245e706403", 00:08:13.897 "base_bdev": "aio_bdev", 00:08:13.897 "thin_provision": false, 00:08:13.897 "num_allocated_clusters": 38, 00:08:13.897 "snapshot": false, 00:08:13.897 "clone": false, 00:08:13.897 "esnap_clone": false 00:08:13.897 } 00:08:13.897 } 00:08:13.897 } 00:08:13.897 ] 00:08:13.897 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:13.897 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c5c83f7-41ff-43c0-9ef3-cd245e706403 00:08:13.897 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:14.158 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:14.158 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c5c83f7-41ff-43c0-9ef3-cd245e706403 00:08:14.158 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:14.418 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:14.418 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 74491f56-5e17-468a-81cc-fd9f8f6d1d40 00:08:14.418 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6c5c83f7-41ff-43c0-9ef3-cd245e706403 00:08:14.678 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:14.939 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:14.939 00:08:14.939 real 0m15.798s 00:08:14.939 user 0m15.395s 00:08:14.939 sys 0m1.535s 00:08:14.939 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.939 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:14.939 ************************************ 00:08:14.939 END TEST lvs_grow_clean 00:08:14.939 ************************************ 00:08:14.939 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:14.939 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:14.939 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.939 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:14.939 ************************************ 00:08:14.939 START TEST lvs_grow_dirty 00:08:14.939 ************************************ 00:08:14.939 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:14.939 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:14.939 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:14.939 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:14.939 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:14.939 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:14.939 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:14.939 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:14.939 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:14.939 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:15.200 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:15.200 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:15.460 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=06700b01-4964-46b5-bc10-1ee56989a823 00:08:15.460 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06700b01-4964-46b5-bc10-1ee56989a823 00:08:15.460 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:15.460 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:15.460 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:15.460 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 06700b01-4964-46b5-bc10-1ee56989a823 lvol 150 00:08:15.720 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=29d97396-b512-4c9c-ac3a-664d36b0502d 00:08:15.720 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:15.720 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:15.720 [2024-07-26 13:49:43.144533] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:15.720 [2024-07-26 13:49:43.144580] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:15.720 true 00:08:15.978 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06700b01-4964-46b5-bc10-1ee56989a823 00:08:15.978 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:15.978 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:15.978 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:16.237 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 29d97396-b512-4c9c-ac3a-664d36b0502d 00:08:16.237 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:16.496 [2024-07-26 13:49:43.806495] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.496 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
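Both variants then revolve around the same core operation, already exercised in the clean run above and repeated in this dirty run: the backing file is doubled to 400M, the AIO bdev is rescanned, and bdev_lvol_grow_lvstore is issued while bdevperf I/O is in flight, the success criterion being that the store's data-cluster count moves from 49 to 99. Reduced to the calls visible in the trace (a sketch; $lvs is the store UUID, $SPDK the checkout path):

  truncate -s 400M $SPDK/test/nvmf/target/aio_bdev                             # 200M -> 400M backing file
  rpc.py bdev_aio_rescan aio_bdev                                              # aio_bdev grows from 51200 to 102400 blocks
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49: the store has not claimed the space
  rpc.py bdev_lvol_grow_lvstore -u "$lvs"                                      # issued while bdevperf writes to the exported lvol
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99 after the grow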
00:08:16.756 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2832353 00:08:16.756 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:16.756 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:16.756 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2832353 /var/tmp/bdevperf.sock 00:08:16.756 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2832353 ']' 00:08:16.756 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:16.756 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.756 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:16.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:16.756 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.756 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:16.756 [2024-07-26 13:49:44.020864] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
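The initiator side is identical in both variants: a second SPDK application, bdevperf, is started on its own RPC socket with -z and only begins I/O once perform_tests is sent; an NVMe-oF controller is attached over TCP to the subsystem exported above, and the test then runs 10 seconds of 4 KiB random writes at queue depth 128 against it. Roughly (a sketch, long paths shortened):

  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests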
00:08:16.756 [2024-07-26 13:49:44.020907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832353 ] 00:08:16.756 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.756 [2024-07-26 13:49:44.074299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.756 [2024-07-26 13:49:44.146185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.695 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.695 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:17.695 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:17.695 Nvme0n1 00:08:17.695 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:17.955 [ 00:08:17.955 { 00:08:17.955 "name": "Nvme0n1", 00:08:17.955 "aliases": [ 00:08:17.955 "29d97396-b512-4c9c-ac3a-664d36b0502d" 00:08:17.955 ], 00:08:17.955 "product_name": "NVMe disk", 00:08:17.955 "block_size": 4096, 00:08:17.955 "num_blocks": 38912, 00:08:17.955 "uuid": "29d97396-b512-4c9c-ac3a-664d36b0502d", 00:08:17.955 "assigned_rate_limits": { 00:08:17.955 "rw_ios_per_sec": 0, 00:08:17.955 "rw_mbytes_per_sec": 0, 00:08:17.955 "r_mbytes_per_sec": 0, 00:08:17.955 "w_mbytes_per_sec": 0 00:08:17.955 }, 00:08:17.955 "claimed": false, 00:08:17.955 "zoned": false, 00:08:17.955 "supported_io_types": { 00:08:17.955 "read": true, 00:08:17.955 "write": true, 00:08:17.955 "unmap": true, 00:08:17.955 "flush": true, 00:08:17.955 "reset": true, 00:08:17.955 "nvme_admin": true, 00:08:17.955 "nvme_io": true, 00:08:17.955 "nvme_io_md": false, 00:08:17.955 "write_zeroes": true, 00:08:17.955 "zcopy": false, 00:08:17.955 "get_zone_info": false, 00:08:17.955 "zone_management": false, 00:08:17.955 "zone_append": false, 00:08:17.955 "compare": true, 00:08:17.955 "compare_and_write": true, 00:08:17.955 "abort": true, 00:08:17.955 "seek_hole": false, 00:08:17.955 "seek_data": false, 00:08:17.955 "copy": true, 00:08:17.955 "nvme_iov_md": false 00:08:17.955 }, 00:08:17.955 "memory_domains": [ 00:08:17.955 { 00:08:17.955 "dma_device_id": "system", 00:08:17.955 "dma_device_type": 1 00:08:17.955 } 00:08:17.955 ], 00:08:17.955 "driver_specific": { 00:08:17.955 "nvme": [ 00:08:17.955 { 00:08:17.955 "trid": { 00:08:17.955 "trtype": "TCP", 00:08:17.955 "adrfam": "IPv4", 00:08:17.955 "traddr": "10.0.0.2", 00:08:17.955 "trsvcid": "4420", 00:08:17.955 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:17.955 }, 00:08:17.955 "ctrlr_data": { 00:08:17.955 "cntlid": 1, 00:08:17.955 "vendor_id": "0x8086", 00:08:17.955 "model_number": "SPDK bdev Controller", 00:08:17.955 "serial_number": "SPDK0", 00:08:17.955 "firmware_revision": "24.09", 00:08:17.955 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:17.955 "oacs": { 00:08:17.955 "security": 0, 00:08:17.955 "format": 0, 00:08:17.955 "firmware": 0, 00:08:17.955 "ns_manage": 0 00:08:17.955 }, 00:08:17.955 
"multi_ctrlr": true, 00:08:17.955 "ana_reporting": false 00:08:17.955 }, 00:08:17.955 "vs": { 00:08:17.955 "nvme_version": "1.3" 00:08:17.955 }, 00:08:17.955 "ns_data": { 00:08:17.955 "id": 1, 00:08:17.955 "can_share": true 00:08:17.955 } 00:08:17.955 } 00:08:17.955 ], 00:08:17.955 "mp_policy": "active_passive" 00:08:17.955 } 00:08:17.955 } 00:08:17.955 ] 00:08:17.955 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2832585 00:08:17.955 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:17.955 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:17.955 Running I/O for 10 seconds... 00:08:19.337 Latency(us) 00:08:19.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.337 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.337 Nvme0n1 : 1.00 21537.00 84.13 0.00 0.00 0.00 0.00 0.00 00:08:19.337 =================================================================================================================== 00:08:19.337 Total : 21537.00 84.13 0.00 0.00 0.00 0.00 0.00 00:08:19.337 00:08:19.908 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 06700b01-4964-46b5-bc10-1ee56989a823 00:08:19.908 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.908 Nvme0n1 : 2.00 21530.00 84.10 0.00 0.00 0.00 0.00 0.00 00:08:19.908 =================================================================================================================== 00:08:19.908 Total : 21530.00 84.10 0.00 0.00 0.00 0.00 0.00 00:08:19.908 00:08:20.168 true 00:08:20.168 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06700b01-4964-46b5-bc10-1ee56989a823 00:08:20.168 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:20.427 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:20.427 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:20.427 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2832585 00:08:20.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.998 Nvme0n1 : 3.00 21592.00 84.34 0.00 0.00 0.00 0.00 0.00 00:08:20.998 =================================================================================================================== 00:08:20.998 Total : 21592.00 84.34 0.00 0.00 0.00 0.00 0.00 00:08:20.998 00:08:21.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.936 Nvme0n1 : 4.00 21744.25 84.94 0.00 0.00 0.00 0.00 0.00 00:08:21.936 =================================================================================================================== 00:08:21.936 Total : 21744.25 84.94 0.00 0.00 0.00 0.00 0.00 00:08:21.936 00:08:23.317 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:08:23.317 Nvme0n1 : 5.00 21771.80 85.05 0.00 0.00 0.00 0.00 0.00 00:08:23.317 =================================================================================================================== 00:08:23.317 Total : 21771.80 85.05 0.00 0.00 0.00 0.00 0.00 00:08:23.317 00:08:24.256 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.256 Nvme0n1 : 6.00 21752.33 84.97 0.00 0.00 0.00 0.00 0.00 00:08:24.256 =================================================================================================================== 00:08:24.256 Total : 21752.33 84.97 0.00 0.00 0.00 0.00 0.00 00:08:24.256 00:08:25.209 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.209 Nvme0n1 : 7.00 21785.43 85.10 0.00 0.00 0.00 0.00 0.00 00:08:25.209 =================================================================================================================== 00:08:25.209 Total : 21785.43 85.10 0.00 0.00 0.00 0.00 0.00 00:08:25.209 00:08:26.215 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.215 Nvme0n1 : 8.00 21849.62 85.35 0.00 0.00 0.00 0.00 0.00 00:08:26.215 =================================================================================================================== 00:08:26.215 Total : 21849.62 85.35 0.00 0.00 0.00 0.00 0.00 00:08:26.215 00:08:27.155 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.155 Nvme0n1 : 9.00 21815.56 85.22 0.00 0.00 0.00 0.00 0.00 00:08:27.155 =================================================================================================================== 00:08:27.155 Total : 21815.56 85.22 0.00 0.00 0.00 0.00 0.00 00:08:27.155 00:08:28.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.096 Nvme0n1 : 10.00 21768.20 85.03 0.00 0.00 0.00 0.00 0.00 00:08:28.096 =================================================================================================================== 00:08:28.096 Total : 21768.20 85.03 0.00 0.00 0.00 0.00 0.00 00:08:28.096 00:08:28.096 00:08:28.096 Latency(us) 00:08:28.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.096 Nvme0n1 : 10.00 21770.86 85.04 0.00 0.00 5875.64 2649.93 31457.28 00:08:28.096 =================================================================================================================== 00:08:28.096 Total : 21770.86 85.04 0.00 0.00 5875.64 2649.93 31457.28 00:08:28.096 0 00:08:28.096 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2832353 00:08:28.096 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2832353 ']' 00:08:28.096 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2832353 00:08:28.096 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:28.096 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.096 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2832353 00:08:28.096 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:28.096 
13:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:28.096 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2832353' 00:08:28.096 killing process with pid 2832353 00:08:28.096 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2832353 00:08:28.096 Received shutdown signal, test time was about 10.000000 seconds 00:08:28.096 00:08:28.096 Latency(us) 00:08:28.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.096 =================================================================================================================== 00:08:28.096 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:28.096 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2832353 00:08:28.356 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:28.356 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:28.616 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06700b01-4964-46b5-bc10-1ee56989a823 00:08:28.616 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:28.876 13:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:28.876 13:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:28.876 13:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2829057 00:08:28.876 13:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2829057 00:08:28.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2829057 Killed "${NVMF_APP[@]}" "$@" 00:08:28.876 13:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:28.876 13:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:28.876 13:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:28.876 13:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:28.876 13:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:28.876 13:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2834436 00:08:28.876 13:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2834436 00:08:28.876 13:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:28.876 13:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2834436 ']' 00:08:28.876 13:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.876 13:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.876 13:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.876 13:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.876 13:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:28.876 [2024-07-26 13:49:56.241986] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:08:28.876 [2024-07-26 13:49:56.242032] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.876 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.876 [2024-07-26 13:49:56.297744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.136 [2024-07-26 13:49:56.377815] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.136 [2024-07-26 13:49:56.377850] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.136 [2024-07-26 13:49:56.377858] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.136 [2024-07-26 13:49:56.377864] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.136 [2024-07-26 13:49:56.377869] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:29.136 [2024-07-26 13:49:56.377886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.707 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.707 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:29.707 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:29.707 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:29.707 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:29.707 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.707 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:29.967 [2024-07-26 13:49:57.235551] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:29.967 [2024-07-26 13:49:57.235632] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:29.967 [2024-07-26 13:49:57.235657] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:29.967 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:29.967 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 29d97396-b512-4c9c-ac3a-664d36b0502d 00:08:29.967 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=29d97396-b512-4c9c-ac3a-664d36b0502d 00:08:29.967 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:29.967 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:29.967 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:29.967 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:29.967 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:30.227 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 29d97396-b512-4c9c-ac3a-664d36b0502d -t 2000 00:08:30.227 [ 00:08:30.227 { 00:08:30.227 "name": "29d97396-b512-4c9c-ac3a-664d36b0502d", 00:08:30.227 "aliases": [ 00:08:30.227 "lvs/lvol" 00:08:30.227 ], 00:08:30.227 "product_name": "Logical Volume", 00:08:30.227 "block_size": 4096, 00:08:30.227 "num_blocks": 38912, 00:08:30.227 "uuid": "29d97396-b512-4c9c-ac3a-664d36b0502d", 00:08:30.227 "assigned_rate_limits": { 00:08:30.227 "rw_ios_per_sec": 0, 00:08:30.227 "rw_mbytes_per_sec": 0, 00:08:30.227 "r_mbytes_per_sec": 0, 00:08:30.227 "w_mbytes_per_sec": 0 00:08:30.227 }, 00:08:30.227 "claimed": false, 00:08:30.227 "zoned": false, 
00:08:30.227 "supported_io_types": { 00:08:30.227 "read": true, 00:08:30.227 "write": true, 00:08:30.227 "unmap": true, 00:08:30.227 "flush": false, 00:08:30.227 "reset": true, 00:08:30.227 "nvme_admin": false, 00:08:30.227 "nvme_io": false, 00:08:30.227 "nvme_io_md": false, 00:08:30.227 "write_zeroes": true, 00:08:30.227 "zcopy": false, 00:08:30.227 "get_zone_info": false, 00:08:30.227 "zone_management": false, 00:08:30.227 "zone_append": false, 00:08:30.227 "compare": false, 00:08:30.227 "compare_and_write": false, 00:08:30.227 "abort": false, 00:08:30.227 "seek_hole": true, 00:08:30.227 "seek_data": true, 00:08:30.227 "copy": false, 00:08:30.227 "nvme_iov_md": false 00:08:30.227 }, 00:08:30.227 "driver_specific": { 00:08:30.227 "lvol": { 00:08:30.227 "lvol_store_uuid": "06700b01-4964-46b5-bc10-1ee56989a823", 00:08:30.227 "base_bdev": "aio_bdev", 00:08:30.227 "thin_provision": false, 00:08:30.227 "num_allocated_clusters": 38, 00:08:30.227 "snapshot": false, 00:08:30.227 "clone": false, 00:08:30.227 "esnap_clone": false 00:08:30.227 } 00:08:30.227 } 00:08:30.227 } 00:08:30.227 ] 00:08:30.227 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:30.227 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06700b01-4964-46b5-bc10-1ee56989a823 00:08:30.227 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:30.563 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:30.563 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06700b01-4964-46b5-bc10-1ee56989a823 00:08:30.563 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:30.563 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:30.563 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:30.823 [2024-07-26 13:49:58.083976] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:30.823 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06700b01-4964-46b5-bc10-1ee56989a823 00:08:30.823 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:30.823 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06700b01-4964-46b5-bc10-1ee56989a823 00:08:30.823 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:30.823 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:08:30.823 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:30.823 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.823 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:30.823 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.823 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:30.823 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:30.823 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06700b01-4964-46b5-bc10-1ee56989a823 00:08:31.085 request: 00:08:31.085 { 00:08:31.085 "uuid": "06700b01-4964-46b5-bc10-1ee56989a823", 00:08:31.085 "method": "bdev_lvol_get_lvstores", 00:08:31.085 "req_id": 1 00:08:31.085 } 00:08:31.085 Got JSON-RPC error response 00:08:31.085 response: 00:08:31.085 { 00:08:31.085 "code": -19, 00:08:31.085 "message": "No such device" 00:08:31.085 } 00:08:31.085 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:31.085 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:31.085 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:31.085 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:31.085 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:31.085 aio_bdev 00:08:31.085 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 29d97396-b512-4c9c-ac3a-664d36b0502d 00:08:31.085 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=29d97396-b512-4c9c-ac3a-664d36b0502d 00:08:31.085 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:31.085 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:31.085 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:31.085 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:31.085 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:31.345 13:49:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 29d97396-b512-4c9c-ac3a-664d36b0502d -t 2000 00:08:31.605 [ 00:08:31.605 { 00:08:31.605 "name": "29d97396-b512-4c9c-ac3a-664d36b0502d", 00:08:31.605 "aliases": [ 00:08:31.605 "lvs/lvol" 00:08:31.605 ], 00:08:31.605 "product_name": "Logical Volume", 00:08:31.605 "block_size": 4096, 00:08:31.605 "num_blocks": 38912, 00:08:31.605 "uuid": "29d97396-b512-4c9c-ac3a-664d36b0502d", 00:08:31.605 "assigned_rate_limits": { 00:08:31.605 "rw_ios_per_sec": 0, 00:08:31.605 "rw_mbytes_per_sec": 0, 00:08:31.605 "r_mbytes_per_sec": 0, 00:08:31.605 "w_mbytes_per_sec": 0 00:08:31.605 }, 00:08:31.605 "claimed": false, 00:08:31.605 "zoned": false, 00:08:31.605 "supported_io_types": { 00:08:31.605 "read": true, 00:08:31.605 "write": true, 00:08:31.605 "unmap": true, 00:08:31.605 "flush": false, 00:08:31.605 "reset": true, 00:08:31.605 "nvme_admin": false, 00:08:31.605 "nvme_io": false, 00:08:31.605 "nvme_io_md": false, 00:08:31.605 "write_zeroes": true, 00:08:31.605 "zcopy": false, 00:08:31.605 "get_zone_info": false, 00:08:31.605 "zone_management": false, 00:08:31.605 "zone_append": false, 00:08:31.605 "compare": false, 00:08:31.605 "compare_and_write": false, 00:08:31.605 "abort": false, 00:08:31.605 "seek_hole": true, 00:08:31.605 "seek_data": true, 00:08:31.605 "copy": false, 00:08:31.605 "nvme_iov_md": false 00:08:31.605 }, 00:08:31.605 "driver_specific": { 00:08:31.605 "lvol": { 00:08:31.605 "lvol_store_uuid": "06700b01-4964-46b5-bc10-1ee56989a823", 00:08:31.605 "base_bdev": "aio_bdev", 00:08:31.605 "thin_provision": false, 00:08:31.605 "num_allocated_clusters": 38, 00:08:31.605 "snapshot": false, 00:08:31.605 "clone": false, 00:08:31.605 "esnap_clone": false 00:08:31.605 } 00:08:31.605 } 00:08:31.605 } 00:08:31.605 ] 00:08:31.605 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:31.605 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06700b01-4964-46b5-bc10-1ee56989a823 00:08:31.605 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:31.605 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:31.605 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06700b01-4964-46b5-bc10-1ee56989a823 00:08:31.605 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:31.866 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:31.866 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 29d97396-b512-4c9c-ac3a-664d36b0502d 00:08:32.126 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 06700b01-4964-46b5-bc10-1ee56989a823 
00:08:32.126 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:32.385 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.385 00:08:32.385 real 0m17.437s 00:08:32.385 user 0m44.610s 00:08:32.385 sys 0m4.105s 00:08:32.385 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.385 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:32.385 ************************************ 00:08:32.385 END TEST lvs_grow_dirty 00:08:32.385 ************************************ 00:08:32.385 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:32.385 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:32.385 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:32.385 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:32.385 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:32.385 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:32.385 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:32.385 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:32.385 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:32.385 nvmf_trace.0 00:08:32.385 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:32.386 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:32.386 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:32.386 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:32.386 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:32.386 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:32.386 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:32.386 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:32.386 rmmod nvme_tcp 00:08:32.386 rmmod nvme_fabrics 00:08:32.646 rmmod nvme_keyring 00:08:32.646 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:32.646 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:32.646 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:32.646 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2834436 ']' 00:08:32.646 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2834436 00:08:32.646 
13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2834436 ']' 00:08:32.646 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2834436 00:08:32.646 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:32.646 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:32.646 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2834436 00:08:32.646 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:32.646 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:32.646 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2834436' 00:08:32.646 killing process with pid 2834436 00:08:32.646 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2834436 00:08:32.646 13:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2834436 00:08:32.646 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:32.646 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:32.646 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:32.646 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:32.646 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:32.646 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.646 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.646 13:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.190 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:35.190 00:08:35.190 real 0m41.862s 00:08:35.190 user 1m5.508s 00:08:35.190 sys 0m9.856s 00:08:35.190 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.190 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:35.190 ************************************ 00:08:35.190 END TEST nvmf_lvs_grow 00:08:35.191 ************************************ 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:35.191 ************************************ 00:08:35.191 START TEST nvmf_bdev_io_wait 00:08:35.191 ************************************ 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:35.191 * Looking for test storage... 00:08:35.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:35.191 
13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:08:35.191 13:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:08:40.474 13:50:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:40.474 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:40.474 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:40.474 Found net devices under 0000:86:00.0: cvl_0_0 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:40.474 Found net devices under 0000:86:00.1: cvl_0_1 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:40.474 13:50:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.474 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:40.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:40.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:08:40.475 00:08:40.475 --- 10.0.0.2 ping statistics --- 00:08:40.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.475 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:40.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:40.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.424 ms 00:08:40.475 00:08:40.475 --- 10.0.0.1 ping statistics --- 00:08:40.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.475 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2838482 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2838482 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2838482 ']' 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.475 13:50:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.475 [2024-07-26 13:50:07.894492] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:08:40.475 [2024-07-26 13:50:07.894538] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.735 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.735 [2024-07-26 13:50:07.953320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.735 [2024-07-26 13:50:08.036093] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.735 [2024-07-26 13:50:08.036131] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.735 [2024-07-26 13:50:08.036138] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.735 [2024-07-26 13:50:08.036145] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.735 [2024-07-26 13:50:08.036150] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.735 [2024-07-26 13:50:08.036193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.735 [2024-07-26 13:50:08.036289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.735 [2024-07-26 13:50:08.036352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.735 [2024-07-26 13:50:08.036353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.303 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.303 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:41.303 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:41.303 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:41.303 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.563 13:50:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.563 [2024-07-26 13:50:08.822368] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.563 Malloc0 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.563 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.564 [2024-07-26 13:50:08.881104] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2838733 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2838735 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:41.564 { 00:08:41.564 "params": { 00:08:41.564 "name": "Nvme$subsystem", 00:08:41.564 "trtype": "$TEST_TRANSPORT", 00:08:41.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.564 "adrfam": "ipv4", 00:08:41.564 "trsvcid": "$NVMF_PORT", 00:08:41.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.564 "hdgst": ${hdgst:-false}, 00:08:41.564 "ddgst": ${ddgst:-false} 00:08:41.564 }, 00:08:41.564 "method": "bdev_nvme_attach_controller" 00:08:41.564 } 00:08:41.564 EOF 00:08:41.564 )") 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2838737 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:41.564 { 00:08:41.564 "params": { 00:08:41.564 "name": "Nvme$subsystem", 00:08:41.564 "trtype": "$TEST_TRANSPORT", 00:08:41.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.564 "adrfam": "ipv4", 00:08:41.564 "trsvcid": "$NVMF_PORT", 00:08:41.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.564 "hdgst": ${hdgst:-false}, 00:08:41.564 "ddgst": ${ddgst:-false} 00:08:41.564 }, 00:08:41.564 "method": "bdev_nvme_attach_controller" 00:08:41.564 } 00:08:41.564 EOF 00:08:41.564 )") 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2838740 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:41.564 { 00:08:41.564 "params": { 00:08:41.564 "name": "Nvme$subsystem", 
00:08:41.564 "trtype": "$TEST_TRANSPORT", 00:08:41.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.564 "adrfam": "ipv4", 00:08:41.564 "trsvcid": "$NVMF_PORT", 00:08:41.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.564 "hdgst": ${hdgst:-false}, 00:08:41.564 "ddgst": ${ddgst:-false} 00:08:41.564 }, 00:08:41.564 "method": "bdev_nvme_attach_controller" 00:08:41.564 } 00:08:41.564 EOF 00:08:41.564 )") 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:41.564 { 00:08:41.564 "params": { 00:08:41.564 "name": "Nvme$subsystem", 00:08:41.564 "trtype": "$TEST_TRANSPORT", 00:08:41.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.564 "adrfam": "ipv4", 00:08:41.564 "trsvcid": "$NVMF_PORT", 00:08:41.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.564 "hdgst": ${hdgst:-false}, 00:08:41.564 "ddgst": ${ddgst:-false} 00:08:41.564 }, 00:08:41.564 "method": "bdev_nvme_attach_controller" 00:08:41.564 } 00:08:41.564 EOF 00:08:41.564 )") 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2838733 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:41.564 "params": { 00:08:41.564 "name": "Nvme1", 00:08:41.564 "trtype": "tcp", 00:08:41.564 "traddr": "10.0.0.2", 00:08:41.564 "adrfam": "ipv4", 00:08:41.564 "trsvcid": "4420", 00:08:41.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.564 "hdgst": false, 00:08:41.564 "ddgst": false 00:08:41.564 }, 00:08:41.564 "method": "bdev_nvme_attach_controller" 00:08:41.564 }' 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:41.564 "params": { 00:08:41.564 "name": "Nvme1", 00:08:41.564 "trtype": "tcp", 00:08:41.564 "traddr": "10.0.0.2", 00:08:41.564 "adrfam": "ipv4", 00:08:41.564 "trsvcid": "4420", 00:08:41.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.564 "hdgst": false, 00:08:41.564 "ddgst": false 00:08:41.564 }, 00:08:41.564 "method": "bdev_nvme_attach_controller" 00:08:41.564 }' 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:41.564 "params": { 00:08:41.564 "name": "Nvme1", 00:08:41.564 "trtype": "tcp", 00:08:41.564 "traddr": "10.0.0.2", 00:08:41.564 "adrfam": "ipv4", 00:08:41.564 "trsvcid": "4420", 00:08:41.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.564 "hdgst": false, 00:08:41.564 "ddgst": false 00:08:41.564 }, 00:08:41.564 "method": "bdev_nvme_attach_controller" 00:08:41.564 }' 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:41.564 13:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:41.564 "params": { 00:08:41.564 "name": "Nvme1", 00:08:41.564 "trtype": "tcp", 00:08:41.564 "traddr": "10.0.0.2", 00:08:41.564 "adrfam": "ipv4", 00:08:41.564 "trsvcid": "4420", 00:08:41.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.564 "hdgst": false, 00:08:41.564 "ddgst": false 00:08:41.564 }, 00:08:41.564 "method": "bdev_nvme_attach_controller" 00:08:41.564 }' 00:08:41.564 [2024-07-26 13:50:08.932041] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:08:41.565 [2024-07-26 13:50:08.932098] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:41.565 [2024-07-26 13:50:08.932498] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:08:41.565 [2024-07-26 13:50:08.932541] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:41.565 [2024-07-26 13:50:08.932725] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:08:41.565 [2024-07-26 13:50:08.932727] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:08:41.565 [2024-07-26 13:50:08.932767] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:41.565 [2024-07-26 13:50:08.932767] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:41.565 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.825 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.825 [2024-07-26 13:50:09.119910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.825 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.825 [2024-07-26 13:50:09.197732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:41.825 [2024-07-26 13:50:09.211351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.084 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.084 [2024-07-26 13:50:09.287518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:42.084 [2024-07-26 13:50:09.313500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.084 [2024-07-26 13:50:09.373845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.084 [2024-07-26 13:50:09.398050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:42.084 [2024-07-26 13:50:09.450181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:42.344 Running I/O for 1 seconds... 00:08:42.344 Running I/O for 1 seconds... 00:08:42.344 Running I/O for 1 seconds... 00:08:42.344 Running I/O for 1 seconds...
00:08:43.282 00:08:43.282 Latency(us) 00:08:43.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.282 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:43.282 Nvme1n1 : 1.02 8430.49 32.93 0.00 0.00 15057.61 6610.59 38523.77 00:08:43.282 =================================================================================================================== 00:08:43.282 Total : 8430.49 32.93 0.00 0.00 15057.61 6610.59 38523.77 00:08:43.282 00:08:43.282 Latency(us) 00:08:43.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.282 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:43.282 Nvme1n1 : 1.00 245633.85 959.51 0.00 0.00 519.36 208.36 673.17 00:08:43.282 =================================================================================================================== 00:08:43.282 Total : 245633.85 959.51 0.00 0.00 519.36 208.36 673.17 00:08:43.282 00:08:43.282 Latency(us) 00:08:43.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.282 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:43.282 Nvme1n1 : 1.01 13819.10 53.98 0.00 0.00 9231.57 3262.55 18008.15 00:08:43.282 =================================================================================================================== 00:08:43.282 Total : 13819.10 53.98 0.00 0.00 9231.57 3262.55 18008.15 00:08:43.282 00:08:43.282 Latency(us) 00:08:43.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.283 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:43.283 Nvme1n1 : 1.01 10363.86 40.48 0.00 0.00 12299.58 4074.63 19603.81 00:08:43.283 =================================================================================================================== 00:08:43.283 Total : 10363.86 40.48 0.00 0.00 12299.58 4074.63 19603.81 00:08:43.542 13:50:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2838735 00:08:43.542 13:50:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2838737 00:08:43.802 13:50:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2838740 00:08:43.802 13:50:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:43.802 13:50:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.802 13:50:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:43.802 13:50:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.802 13:50:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:43.802 rmmod nvme_tcp 00:08:43.802 rmmod nvme_fabrics 00:08:43.802 rmmod nvme_keyring 00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2838482 ']' 00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2838482 00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2838482 ']' 00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2838482 00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2838482 00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2838482' 00:08:43.802 killing process with pid 2838482 00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2838482 00:08:43.802 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2838482 00:08:44.062 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:44.062 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:44.062 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:44.062 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:44.062 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:44.062 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.062 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.062 13:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.025 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:46.025 00:08:46.025 real 0m11.143s 00:08:46.025 user 0m20.363s 00:08:46.025 sys 0m5.846s 00:08:46.025 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.025 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:46.025 ************************************ 00:08:46.025 END TEST 
nvmf_bdev_io_wait 00:08:46.025 ************************************ 00:08:46.025 13:50:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:46.025 13:50:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:46.025 13:50:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.025 13:50:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:46.025 ************************************ 00:08:46.025 START TEST nvmf_queue_depth 00:08:46.025 ************************************ 00:08:46.025 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:46.285 * Looking for test storage... 00:08:46.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- 
# [[ -e /bin/wpdk_common.sh ]] 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- 
# '[' 0 -eq 1 ']' 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:08:46.285 13:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@296 -- # e810=() 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:51.569 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:51.569 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:51.569 Found net devices under 0000:86:00.0: cvl_0_0 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:51.569 Found net devices under 0000:86:00.1: cvl_0_1 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.569 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:51.570 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:51.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:51.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:08:51.831 00:08:51.831 --- 10.0.0.2 ping statistics --- 00:08:51.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.831 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:08:51.831 00:08:51.831 --- 10.0.0.1 ping statistics --- 00:08:51.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.831 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2842603 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2842603 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2842603 ']' 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.831 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.831 [2024-07-26 13:50:19.176666] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:08:51.831 [2024-07-26 13:50:19.176712] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.831 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.831 [2024-07-26 13:50:19.236467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.091 [2024-07-26 13:50:19.314217] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.091 [2024-07-26 13:50:19.314256] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.091 [2024-07-26 13:50:19.314268] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.091 [2024-07-26 13:50:19.314274] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.091 [2024-07-26 13:50:19.314279] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.091 [2024-07-26 13:50:19.314296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.661 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:52.661 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:52.661 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:52.661 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:52.661 13:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.661 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.662 [2024-07-26 13:50:20.012976] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.662 Malloc0 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.662 [2024-07-26 13:50:20.068434] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2842771 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2842771 /var/tmp/bdevperf.sock 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2842771 ']' 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:52.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:52.662 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.922 [2024-07-26 13:50:20.116872] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:08:52.922 [2024-07-26 13:50:20.116913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2842771 ] 00:08:52.922 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.922 [2024-07-26 13:50:20.171944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.922 [2024-07-26 13:50:20.249437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.492 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:53.492 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:53.492 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:53.492 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.492 13:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:53.752 NVMe0n1 00:08:53.752 13:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.752 13:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:54.011 Running I/O for 10 seconds... 00:09:04.011 00:09:04.011 Latency(us) 00:09:04.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.011 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:04.011 Verification LBA range: start 0x0 length 0x4000 00:09:04.011 NVMe0n1 : 10.06 11904.23 46.50 0.00 0.00 85748.34 20743.57 67017.68 00:09:04.011 =================================================================================================================== 00:09:04.011 Total : 11904.23 46.50 0.00 0.00 85748.34 20743.57 67017.68 00:09:04.011 0 00:09:04.011 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2842771 00:09:04.011 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2842771 ']' 00:09:04.011 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2842771 00:09:04.011 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:04.011 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.011 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2842771 00:09:04.011 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:04.011 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:04.011 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2842771' 00:09:04.011 killing process with pid 2842771 00:09:04.011 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2842771 00:09:04.011 Received shutdown 
signal, test time was about 10.000000 seconds 00:09:04.011 00:09:04.011 Latency(us) 00:09:04.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.011 =================================================================================================================== 00:09:04.011 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:04.011 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2842771 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:04.272 rmmod nvme_tcp 00:09:04.272 rmmod nvme_fabrics 00:09:04.272 rmmod nvme_keyring 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2842603 ']' 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2842603 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2842603 ']' 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2842603 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2842603 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2842603' 00:09:04.272 killing process with pid 2842603 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2842603 00:09:04.272 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2842603 00:09:04.532 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:04.532 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:04.532 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:04.532 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:04.532 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:04.532 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.532 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.532 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.090 13:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:07.090 00:09:07.090 real 0m20.506s 00:09:07.090 user 0m25.016s 00:09:07.090 sys 0m5.766s 00:09:07.090 13:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.090 13:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:07.090 ************************************ 00:09:07.090 END TEST nvmf_queue_depth 00:09:07.090 ************************************ 00:09:07.090 13:50:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:07.090 13:50:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:07.090 13:50:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.090 13:50:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:07.090 ************************************ 00:09:07.090 START TEST nvmf_target_multipath 00:09:07.090 ************************************ 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:07.090 * Looking for test storage... 
00:09:07.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:07.090 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
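Before any traffic flows, the trace that follows walks gather_supported_nvmf_pci_devs: NICs are bucketed by PCI vendor/device ID (E810, X722, several Mellanox parts) and each matching PCI address is resolved to its kernel net device through sysfs. The sketch below approximates that logic; the pci_bus_cache lookup of the real script is replaced by an lspci scan and the "up" test by reading operstate, both of which are illustrative stand-ins rather than the script's exact mechanics.

#!/usr/bin/env bash
# Approximate replay of the device discovery traced below. The device IDs are
# the Intel E810 ones this test bed reports (0x1592/0x159b); lspci and the
# operstate check are assumptions standing in for pci_bus_cache and the
# script's own up-check.
declare -a net_devs=()

while read -r pci; do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $netdir ]] || continue
        dev=${netdir##*/}
        # Keep only interfaces that are actually up, as the trace does.
        if [[ $(cat "/sys/class/net/$dev/operstate" 2>/dev/null) == up ]]; then
            echo "Found net devices under $pci: $dev"
            net_devs+=("$dev")
        fi
    done
done < <(lspci -Dnn | grep -Ei '8086:(1592|159b)' | awk '{print $1}')

# With two ports found (cvl_0_0 and cvl_0_1 on this machine), the first
# becomes the target interface and the second the initiator interface.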
00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:12.374 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:12.375 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:12.375 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:12.375 Found net devices under 0000:86:00.0: cvl_0_0 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.375 13:50:39 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:12.375 Found net devices under 0000:86:00.1: cvl_0_1 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:12.375 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:09:12.375 00:09:12.375 --- 10.0.0.2 ping statistics --- 00:09:12.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.375 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:12.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:12.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.384 ms 00:09:12.375 00:09:12.375 --- 10.0.0.1 ping statistics --- 00:09:12.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.375 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:12.375 only one NIC for nvmf test 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:12.375 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:12.376 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:12.376 rmmod nvme_tcp 00:09:12.376 rmmod nvme_fabrics 00:09:12.376 rmmod nvme_keyring 00:09:12.376 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:12.376 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:12.376 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:12.376 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:12.376 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:12.376 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:12.376 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:12.376 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:12.376 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:12.376 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.376 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.376 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:14.324 00:09:14.324 real 0m7.668s 
00:09:14.324 user 0m1.554s 00:09:14.324 sys 0m4.117s 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:14.324 ************************************ 00:09:14.324 END TEST nvmf_target_multipath 00:09:14.324 ************************************ 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.324 13:50:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:14.325 ************************************ 00:09:14.325 START TEST nvmf_zcopy 00:09:14.325 ************************************ 00:09:14.325 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:14.585 * Looking for test storage... 00:09:14.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.585 13:50:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:14.585 13:50:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:14.585 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:14.586 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:14.586 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.586 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:14.586 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:14.586 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:14.586 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.586 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.586 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.586 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:14.586 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:14.586 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:14.586 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:09:19.867 13:50:46 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:19.867 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:19.867 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:19.867 Found net devices under 0000:86:00.0: cvl_0_0 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:19.867 Found net devices under 0000:86:00.1: cvl_0_1 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.867 13:50:46 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:19.867 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:19.867 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:19.867 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:19.867 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:19.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:19.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:09:19.867 00:09:19.867 --- 10.0.0.2 ping statistics --- 00:09:19.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.867 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:09:19.867 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:19.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:19.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:09:19.867 00:09:19.867 --- 10.0.0.1 ping statistics --- 00:09:19.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.867 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:09:19.867 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:19.867 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:09:19.867 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:19.868 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:19.868 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:19.868 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:19.868 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:19.868 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:19.868 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:19.868 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:19.868 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:19.868 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:19.868 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.868 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2851646 00:09:19.868 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2851646 00:09:19.868 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:19.868 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2851646 ']' 00:09:19.868 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.868 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:19.868 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.868 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:19.868 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.868 [2024-07-26 13:50:47.167000] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
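The two ping blocks above verify the namespace split that nvmf_tcp_init builds for every TCP run: one port of the NIC is moved into a private namespace and becomes the target side (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened, and the target application is then launched inside the namespace. The commands below are condensed from the trace; the relative nvmf_tgt path and the variables for interface and namespace names are the only liberties taken.

#!/usr/bin/env bash
# Condensed replay of nvmf_tcp_init as it appears in the trace. Interface
# names (cvl_0_0/cvl_0_1) are specific to this E810 test bed; the nvmf_tgt
# path is written relative to an SPDK build tree instead of the Jenkins
# workspace path used in the log.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target-side port, moved into the namespace
INI_IF=cvl_0_1   # initiator-side port, stays in the root namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic reach the port the listeners use.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Same reachability checks as the ping output above.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# The target is then started inside the namespace, as the next trace lines show.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &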
00:09:19.868 [2024-07-26 13:50:47.167057] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:19.868 EAL: No free 2048 kB hugepages reported on node 1 00:09:19.868 [2024-07-26 13:50:47.225518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.868 [2024-07-26 13:50:47.295939] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:19.868 [2024-07-26 13:50:47.295980] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:19.868 [2024-07-26 13:50:47.295986] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:19.868 [2024-07-26 13:50:47.295992] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:19.868 [2024-07-26 13:50:47.295997] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:19.868 [2024-07-26 13:50:47.296015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.808 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.808 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:20.808 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:20.808 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:20.808 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.808 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.808 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:20.808 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:20.808 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.808 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.808 [2024-07-26 13:50:47.995240] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.808 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.808 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:20.808 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.808 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.808 [2024-07-26 13:50:48.015409] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.808 malloc0 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:20.808 { 00:09:20.808 "params": { 00:09:20.808 "name": "Nvme$subsystem", 00:09:20.808 "trtype": "$TEST_TRANSPORT", 00:09:20.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:20.808 "adrfam": "ipv4", 00:09:20.808 "trsvcid": "$NVMF_PORT", 00:09:20.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:20.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:20.808 "hdgst": ${hdgst:-false}, 00:09:20.808 "ddgst": ${ddgst:-false} 00:09:20.808 }, 00:09:20.808 "method": "bdev_nvme_attach_controller" 00:09:20.808 } 00:09:20.808 EOF 00:09:20.808 )") 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
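Everything the zcopy test needs on the target side is provisioned through rpc_cmd in the lines above: a TCP transport with zero-copy enabled, subsystem cnode1, a listener on 10.0.0.2:4420, and a 32 MiB malloc bdev attached as namespace 1. The equivalent standalone calls are sketched below; the flags are copied from the trace, while the rpc.py path and the default /var/tmp/spdk.sock socket are the usual SPDK locations rather than values shown in this run.

#!/usr/bin/env bash
# Standalone equivalent of the rpc_cmd calls traced above (flags copied
# verbatim from the log). Assumes an SPDK checkout with scripts/rpc.py and a
# target already listening on the default /var/tmp/spdk.sock RPC socket.
RPC=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# TCP transport; "-o -c 0 --zcopy" are the options the zcopy test adds.
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

# Subsystem allowing any host (-a), with a serial and room for 10 namespaces.
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10

# Listener on the namespaced target address used throughout this run.
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# 32 MiB RAM-backed bdev with 4096-byte blocks, exported as namespace 1.
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns "$NQN" malloc0 -n 1

bdevperf is then pointed at this subsystem through the JSON that gen_nvmf_target_json prints next in the log, using the same command line as the trace: ./build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192.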
00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:20.808 13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:20.808 "params": { 00:09:20.808 "name": "Nvme1", 00:09:20.808 "trtype": "tcp", 00:09:20.808 "traddr": "10.0.0.2", 00:09:20.808 "adrfam": "ipv4", 00:09:20.808 "trsvcid": "4420", 00:09:20.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:20.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:20.808 "hdgst": false, 00:09:20.808 "ddgst": false 00:09:20.808 }, 00:09:20.808 "method": "bdev_nvme_attach_controller" 00:09:20.808 }' 00:09:20.808 [2024-07-26 13:50:48.111862] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:09:20.808 [2024-07-26 13:50:48.111905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851683 ] 00:09:20.808 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.808 [2024-07-26 13:50:48.166282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.808 [2024-07-26 13:50:48.240474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.377 Running I/O for 10 seconds... 00:09:31.422 00:09:31.422 Latency(us) 00:09:31.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.422 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:31.422 Verification LBA range: start 0x0 length 0x1000 00:09:31.422 Nvme1n1 : 10.01 7406.63 57.86 0.00 0.00 17235.31 1410.45 54936.26 00:09:31.422 =================================================================================================================== 00:09:31.422 Total : 7406.63 57.86 0.00 0.00 17235.31 1410.45 54936.26 00:09:31.422 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2853512 00:09:31.422 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:31.422 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.422 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:31.422 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:31.422 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:31.422 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:31.422 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:31.422 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:31.422 { 00:09:31.422 "params": { 00:09:31.422 "name": "Nvme$subsystem", 00:09:31.422 "trtype": "$TEST_TRANSPORT", 00:09:31.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:31.422 "adrfam": "ipv4", 00:09:31.422 "trsvcid": "$NVMF_PORT", 00:09:31.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:31.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:31.422 "hdgst": ${hdgst:-false}, 00:09:31.422 "ddgst": ${ddgst:-false} 00:09:31.422 }, 00:09:31.422 "method": "bdev_nvme_attach_controller" 00:09:31.422 } 00:09:31.422 EOF 00:09:31.422 )") 00:09:31.422 13:50:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:31.422 [2024-07-26 13:50:58.787799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.422 [2024-07-26 13:50:58.787841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.422 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:31.422 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:31.422 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:31.422 "params": { 00:09:31.422 "name": "Nvme1", 00:09:31.422 "trtype": "tcp", 00:09:31.422 "traddr": "10.0.0.2", 00:09:31.422 "adrfam": "ipv4", 00:09:31.422 "trsvcid": "4420", 00:09:31.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:31.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:31.422 "hdgst": false, 00:09:31.422 "ddgst": false 00:09:31.422 }, 00:09:31.422 "method": "bdev_nvme_attach_controller" 00:09:31.422 }' 00:09:31.422 [2024-07-26 13:50:58.799794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.422 [2024-07-26 13:50:58.799807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.422 [2024-07-26 13:50:58.807811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.422 [2024-07-26 13:50:58.807821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.422 [2024-07-26 13:50:58.819842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.422 [2024-07-26 13:50:58.819852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.422 [2024-07-26 13:50:58.824855] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
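After the first 10-second verify run (roughly 7.4k IOPS at 8 KiB per the latency table above), the test launches a second bdevperf with -t 5 -q 128 -w randrw -M 50 -o 8192, again feeding the generated NVMe attach configuration over an anonymous pipe (--json /dev/fd/63). A standalone sketch of that invocation with the configuration written to a regular file instead; the attach parameters are the ones printf'd in the trace, while the surrounding "subsystems"/"bdev" wrapper, the /tmp path, and the relative bdevperf path are assumptions:

# Hypothetical on-disk equivalent of what gen_nvmf_target_json pipes to bdevperf
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 5-second 50/50 random read/write run at queue depth 128 with 8 KiB I/Os
build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192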
00:09:31.422 [2024-07-26 13:50:58.824898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2853512 ] 00:09:31.422 [2024-07-26 13:50:58.831874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.422 [2024-07-26 13:50:58.831884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.422 [2024-07-26 13:50:58.843904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.422 [2024-07-26 13:50:58.843914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.422 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.422 [2024-07-26 13:50:58.855936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.422 [2024-07-26 13:50:58.855946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.681 [2024-07-26 13:50:58.867969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.681 [2024-07-26 13:50:58.867979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.681 [2024-07-26 13:50:58.877702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.681 [2024-07-26 13:50:58.880003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.681 [2024-07-26 13:50:58.880013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.681 [2024-07-26 13:50:58.892039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.681 [2024-07-26 13:50:58.892057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.681 [2024-07-26 13:50:58.904087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.681 [2024-07-26 13:50:58.904098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.681 [2024-07-26 13:50:58.916119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.682 [2024-07-26 13:50:58.916137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.682 [2024-07-26 13:50:58.928136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.682 [2024-07-26 13:50:58.928150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.682 [2024-07-26 13:50:58.940167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.682 [2024-07-26 13:50:58.940176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.682 [2024-07-26 13:50:58.952203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.682 [2024-07-26 13:50:58.952230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.682 [2024-07-26 13:50:58.952428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.682 [2024-07-26 13:50:58.964242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.682 [2024-07-26 13:50:58.964260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.682 [2024-07-26 13:50:58.976272] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.682 [2024-07-26 13:50:58.976288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.682 [2024-07-26 13:50:58.988311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.682 [2024-07-26 13:50:58.988322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.682 [2024-07-26 13:50:59.000339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.682 [2024-07-26 13:50:59.000350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.682 [2024-07-26 13:50:59.012361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.682 [2024-07-26 13:50:59.012372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.682 [2024-07-26 13:50:59.024390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.682 [2024-07-26 13:50:59.024400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.682 [2024-07-26 13:50:59.036422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.682 [2024-07-26 13:50:59.036432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.682 [2024-07-26 13:50:59.048473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.682 [2024-07-26 13:50:59.048493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.682 [2024-07-26 13:50:59.060499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.682 [2024-07-26 13:50:59.060513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.682 [2024-07-26 13:50:59.072531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.682 [2024-07-26 13:50:59.072545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.682 [2024-07-26 13:50:59.084557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.682 [2024-07-26 13:50:59.084567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.682 [2024-07-26 13:50:59.096586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.682 [2024-07-26 13:50:59.096596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.682 [2024-07-26 13:50:59.108625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.682 [2024-07-26 13:50:59.108638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.941 [2024-07-26 13:50:59.120660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.941 [2024-07-26 13:50:59.120674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.941 [2024-07-26 13:50:59.132688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.941 [2024-07-26 13:50:59.132699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.941 [2024-07-26 13:50:59.144724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.941 [2024-07-26 13:50:59.144734] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.941 [2024-07-26 13:50:59.156753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.941 [2024-07-26 13:50:59.156766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.941 [2024-07-26 13:50:59.168793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.941 [2024-07-26 13:50:59.168807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.941 [2024-07-26 13:50:59.180825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.941 [2024-07-26 13:50:59.180837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.941 [2024-07-26 13:50:59.192857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.941 [2024-07-26 13:50:59.192867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.941 [2024-07-26 13:50:59.204891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.941 [2024-07-26 13:50:59.204902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.941 [2024-07-26 13:50:59.216921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.941 [2024-07-26 13:50:59.216933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.941 [2024-07-26 13:50:59.228957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.941 [2024-07-26 13:50:59.228967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.941 [2024-07-26 13:50:59.240995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.941 [2024-07-26 13:50:59.241008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.941 [2024-07-26 13:50:59.253027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.941 [2024-07-26 13:50:59.253038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.941 [2024-07-26 13:50:59.265071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.941 [2024-07-26 13:50:59.265090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.941 [2024-07-26 13:50:59.277097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.941 [2024-07-26 13:50:59.277109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.941 Running I/O for 5 seconds... 
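The long run of paired errors surrounding this point ("Requested NSID 1 already in use" from spdk_nvmf_subsystem_add_ns_ext followed by "Unable to add namespace" from nvmf_rpc_ns_paused) is the expected by-product of this phase: while the 5-second randrw job runs, the test keeps re-issuing nvmf_subsystem_add_ns for a namespace that already exists, and each RPC pauses the subsystem, fails the add, and resumes it (hence the nvmf_rpc_ns_paused callback in the message), so subsystem pause/resume is exercised with zero-copy I/O in flight. A hypothetical sketch of such a loop; $perfpid holding the bdevperf PID, the scripts/rpc.py path, and the exact loop shape used by zcopy.sh are assumptions:

# Re-add the already-present namespace for as long as the randrw job is alive;
# every attempt pauses nqn.2016-06.io.spdk:cnode1, fails with "NSID 1 already
# in use", and resumes it, matching the repeated errors recorded here.
while kill -0 "$perfpid" 2>/dev/null; do
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done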
00:09:31.941 [2024-07-26 13:50:59.302020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.941 [2024-07-26 13:50:59.302040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.941 [2024-07-26 13:50:59.317532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.941 [2024-07-26 13:50:59.317553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.941 [2024-07-26 13:50:59.332078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.941 [2024-07-26 13:50:59.332098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.941 [2024-07-26 13:50:59.347358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.941 [2024-07-26 13:50:59.347379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.941 [2024-07-26 13:50:59.361574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.941 [2024-07-26 13:50:59.361593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.941 [2024-07-26 13:50:59.372619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.941 [2024-07-26 13:50:59.372638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.201 [2024-07-26 13:50:59.380732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.201 [2024-07-26 13:50:59.380751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.201 [2024-07-26 13:50:59.395610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.201 [2024-07-26 13:50:59.395630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.201 [2024-07-26 13:50:59.407420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.201 [2024-07-26 13:50:59.407440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.201 [2024-07-26 13:50:59.423384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.201 [2024-07-26 13:50:59.423411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.201 [2024-07-26 13:50:59.441010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.201 [2024-07-26 13:50:59.441031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.201 [2024-07-26 13:50:59.455427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.201 [2024-07-26 13:50:59.455447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.201 [2024-07-26 13:50:59.469457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.201 [2024-07-26 13:50:59.469476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.201 [2024-07-26 13:50:59.483829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.201 [2024-07-26 13:50:59.483848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.201 [2024-07-26 13:50:59.493434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.201 
[2024-07-26 13:50:59.493453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.201 [2024-07-26 13:50:59.510177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.201 [2024-07-26 13:50:59.510198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.201 [2024-07-26 13:50:59.525600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.201 [2024-07-26 13:50:59.525619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.201 [2024-07-26 13:50:59.535111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.201 [2024-07-26 13:50:59.535130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.201 [2024-07-26 13:50:59.549612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.201 [2024-07-26 13:50:59.549631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.201 [2024-07-26 13:50:59.567076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.201 [2024-07-26 13:50:59.567095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.201 [2024-07-26 13:50:59.575277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.201 [2024-07-26 13:50:59.575296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.201 [2024-07-26 13:50:59.589839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.201 [2024-07-26 13:50:59.589859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.201 [2024-07-26 13:50:59.604933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.201 [2024-07-26 13:50:59.604952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.201 [2024-07-26 13:50:59.621001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.201 [2024-07-26 13:50:59.621019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.201 [2024-07-26 13:50:59.631862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.201 [2024-07-26 13:50:59.631881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.460 [2024-07-26 13:50:59.647167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.460 [2024-07-26 13:50:59.647186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.460 [2024-07-26 13:50:59.663190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.460 [2024-07-26 13:50:59.663210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.460 [2024-07-26 13:50:59.680427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.460 [2024-07-26 13:50:59.680446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.460 [2024-07-26 13:50:59.695651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.460 [2024-07-26 13:50:59.695674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.460 [2024-07-26 13:50:59.710254] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.460 [2024-07-26 13:50:59.710273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.460 [2024-07-26 13:50:59.728309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.460 [2024-07-26 13:50:59.728328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.460 [2024-07-26 13:50:59.737626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.460 [2024-07-26 13:50:59.737645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.460 [2024-07-26 13:50:59.751845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.460 [2024-07-26 13:50:59.751863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.460 [2024-07-26 13:50:59.765391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.460 [2024-07-26 13:50:59.765410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.460 [2024-07-26 13:50:59.781247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.460 [2024-07-26 13:50:59.781265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.460 [2024-07-26 13:50:59.791322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.460 [2024-07-26 13:50:59.791341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.460 [2024-07-26 13:50:59.803274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.460 [2024-07-26 13:50:59.803292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.460 [2024-07-26 13:50:59.817128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.460 [2024-07-26 13:50:59.817147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.460 [2024-07-26 13:50:59.831343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.460 [2024-07-26 13:50:59.831362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.460 [2024-07-26 13:50:59.845451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.460 [2024-07-26 13:50:59.845470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.460 [2024-07-26 13:50:59.861261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.460 [2024-07-26 13:50:59.861280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.460 [2024-07-26 13:50:59.877424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.460 [2024-07-26 13:50:59.877444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.460 [2024-07-26 13:50:59.893714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.460 [2024-07-26 13:50:59.893735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:50:59.903526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:50:59.903547] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:50:59.917716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:50:59.917736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:50:59.930403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:50:59.930421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:50:59.942611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:50:59.942629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:50:59.957537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:50:59.957559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:50:59.972221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:50:59.972240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:50:59.987129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:50:59.987148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:50:59.999706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:50:59.999724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:51:00.008266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:51:00.008285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:51:00.017537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:51:00.017557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:51:00.028574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:51:00.028595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:51:00.040282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:51:00.040302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:51:00.049533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:51:00.049552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:51:00.059660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:51:00.059678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:51:00.067065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:51:00.067084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:51:00.077160] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:51:00.077179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:51:00.086183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:51:00.086203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:51:00.094694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:51:00.094715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:51:00.102188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:51:00.102207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:51:00.114601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:51:00.114619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:51:00.123448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:51:00.123466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:51:00.134503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:51:00.134521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:51:00.144980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:51:00.144999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.720 [2024-07-26 13:51:00.154303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.720 [2024-07-26 13:51:00.154326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.165649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.165668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.174778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.174796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.184098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.184117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.191942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.191961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.200290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.200309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.210340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.210359] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.219127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.219145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.228175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.228194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.234900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.234918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.243590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.243609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.251937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.251955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.262961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.262980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.270413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.270431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.280871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.280890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.289764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.289782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.297202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.297220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.307130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.307149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.316024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.316048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.324825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.324843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.332280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.332298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.343493] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.343511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.353885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.353904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.360936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.360954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.370983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.371002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.380442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.380460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.387126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.387144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.397360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.397379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.979 [2024-07-26 13:51:00.406642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.979 [2024-07-26 13:51:00.406660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.237 [2024-07-26 13:51:00.415417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.237 [2024-07-26 13:51:00.415436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.237 [2024-07-26 13:51:00.423151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.237 [2024-07-26 13:51:00.423170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.237 [2024-07-26 13:51:00.432587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.237 [2024-07-26 13:51:00.432605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.237 [2024-07-26 13:51:00.440474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.237 [2024-07-26 13:51:00.440491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.237 [2024-07-26 13:51:00.450424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.237 [2024-07-26 13:51:00.450443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.237 [2024-07-26 13:51:00.460131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.237 [2024-07-26 13:51:00.460149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.467908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.467926] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.475952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.475970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.486689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.486708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.495814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.495832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.503484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.503502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.513361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.513380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.521306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.521325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.530488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.530507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.539593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.539612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.549435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.549455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.558129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.558148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.566008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.566027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.575769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.575788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.583798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.583817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.591222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.591241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.601713] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.601731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.609405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.609424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.619570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.619589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.627295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.627314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.636647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.636668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.644878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.644897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.654086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.654105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.662160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.662178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.238 [2024-07-26 13:51:00.671464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.238 [2024-07-26 13:51:00.671483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.497 [2024-07-26 13:51:00.680358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.497 [2024-07-26 13:51:00.680377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.497 [2024-07-26 13:51:00.689705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.497 [2024-07-26 13:51:00.689723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.497 [2024-07-26 13:51:00.697350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.497 [2024-07-26 13:51:00.697369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.497 [2024-07-26 13:51:00.705339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.497 [2024-07-26 13:51:00.705358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.497 [2024-07-26 13:51:00.716411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.497 [2024-07-26 13:51:00.716430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.497 [2024-07-26 13:51:00.724845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.497 [2024-07-26 13:51:00.724865] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.497 [2024-07-26 13:51:00.731933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.497 [2024-07-26 13:51:00.731952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.497 [2024-07-26 13:51:00.742009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.497 [2024-07-26 13:51:00.742028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.497 [2024-07-26 13:51:00.749301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.497 [2024-07-26 13:51:00.749319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.497 [2024-07-26 13:51:00.759505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.497 [2024-07-26 13:51:00.759523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.497 [2024-07-26 13:51:00.767323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.497 [2024-07-26 13:51:00.767341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.497 [2024-07-26 13:51:00.775625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.497 [2024-07-26 13:51:00.775643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.497 [2024-07-26 13:51:00.786051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.497 [2024-07-26 13:51:00.786070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.497 [2024-07-26 13:51:00.793508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.497 [2024-07-26 13:51:00.793529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.497 [2024-07-26 13:51:00.802429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.497 [2024-07-26 13:51:00.802448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.497 [2024-07-26 13:51:00.812420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.498 [2024-07-26 13:51:00.812438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.498 [2024-07-26 13:51:00.819767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.498 [2024-07-26 13:51:00.819788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.498 [2024-07-26 13:51:00.831248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.498 [2024-07-26 13:51:00.831266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.498 [2024-07-26 13:51:00.841552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.498 [2024-07-26 13:51:00.841570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.498 [2024-07-26 13:51:00.850281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.498 [2024-07-26 13:51:00.850300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.498 [2024-07-26 13:51:00.857591] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.498 [2024-07-26 13:51:00.857608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.498 [2024-07-26 13:51:00.867957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.498 [2024-07-26 13:51:00.867976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.498 [2024-07-26 13:51:00.875582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.498 [2024-07-26 13:51:00.875600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.498 [2024-07-26 13:51:00.884211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.498 [2024-07-26 13:51:00.884229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.498 [2024-07-26 13:51:00.893904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.498 [2024-07-26 13:51:00.893922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.498 [2024-07-26 13:51:00.903257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.498 [2024-07-26 13:51:00.903275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.498 [2024-07-26 13:51:00.913537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.498 [2024-07-26 13:51:00.913557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.498 [2024-07-26 13:51:00.922937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.498 [2024-07-26 13:51:00.922956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.758 [2024-07-26 13:51:00.937602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.758 [2024-07-26 13:51:00.937622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.758 [2024-07-26 13:51:00.948715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.758 [2024-07-26 13:51:00.948733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.758 [2024-07-26 13:51:00.957407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.758 [2024-07-26 13:51:00.957425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.758 [2024-07-26 13:51:00.966247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.758 [2024-07-26 13:51:00.966266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.758 [2024-07-26 13:51:00.975352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.758 [2024-07-26 13:51:00.975370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.758 [2024-07-26 13:51:00.990383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.758 [2024-07-26 13:51:00.990401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.758 [2024-07-26 13:51:01.006516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.758 [2024-07-26 13:51:01.006535] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.910 [2024-07-26 13:51:04.069428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.910 [2024-07-26 13:51:04.069446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.910 [2024-07-26 13:51:04.085732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.910 [2024-07-26 13:51:04.085751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.910 [2024-07-26 13:51:04.105759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.910 [2024-07-26 13:51:04.105777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.910 [2024-07-26 13:51:04.119271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.910 [2024-07-26 13:51:04.119290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.910 [2024-07-26 13:51:04.127429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.910 [2024-07-26 13:51:04.127446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.910 [2024-07-26 13:51:04.137177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.910 [2024-07-26 13:51:04.137196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.910 [2024-07-26 13:51:04.157424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.910 [2024-07-26 13:51:04.157443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.910 [2024-07-26 13:51:04.168976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.910 [2024-07-26 13:51:04.168994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.910 [2024-07-26 13:51:04.179138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.910 [2024-07-26 13:51:04.179157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.910 [2024-07-26 13:51:04.194105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.910 [2024-07-26 13:51:04.194124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.910 [2024-07-26 13:51:04.210787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.910 [2024-07-26 13:51:04.210805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.910 [2024-07-26 13:51:04.220609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.910 [2024-07-26 13:51:04.220626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.910 [2024-07-26 13:51:04.231907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.910 [2024-07-26 13:51:04.231925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.910 [2024-07-26 13:51:04.249161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.910 [2024-07-26 13:51:04.249183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.910 [2024-07-26 13:51:04.265953] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.910 [2024-07-26 13:51:04.265972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.910 [2024-07-26 13:51:04.280720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.910 [2024-07-26 13:51:04.280738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.910 [2024-07-26 13:51:04.293462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.910 [2024-07-26 13:51:04.293480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:36.910
00:09:36.910 Latency(us)
00:09:36.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:36.910 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:36.910 Nvme1n1 : 5.01 15093.94 117.92 0.00 0.00 8471.41 2094.30 33052.94
00:09:36.910 ===================================================================================================================
00:09:36.910 Total : 15093.94 117.92 0.00 0.00 8471.41 2094.30 33052.94
00:09:36.910 [2024-07-26 13:51:04.310911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.910 [2024-07-26 13:51:04.310929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.910 [2024-07-26 13:51:04.318925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.910 [2024-07-26 13:51:04.318935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.910 [2024-07-26 13:51:04.330971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.910 [2024-07-26 13:51:04.330992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.910 [2024-07-26 13:51:04.342997] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.910 [2024-07-26 13:51:04.343011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.170 [2024-07-26 13:51:04.355027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.170 [2024-07-26 13:51:04.355045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.170 [2024-07-26 13:51:04.367063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.170 [2024-07-26 13:51:04.367077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.170 [2024-07-26 13:51:04.379096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.170 [2024-07-26 13:51:04.379110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.170 [2024-07-26 13:51:04.391120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.170 [2024-07-26 13:51:04.391134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.170 [2024-07-26 13:51:04.403149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.170 [2024-07-26 13:51:04.403161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.170 [2024-07-26 13:51:04.415180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.170 [2024-07-26 13:51:04.415189]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.170 [2024-07-26 13:51:04.427216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.170 [2024-07-26 13:51:04.427227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.170 [2024-07-26 13:51:04.439246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.170 [2024-07-26 13:51:04.439256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.170 [2024-07-26 13:51:04.451280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.170 [2024-07-26 13:51:04.451289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.170 [2024-07-26 13:51:04.463315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.170 [2024-07-26 13:51:04.463327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.170 [2024-07-26 13:51:04.475343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.170 [2024-07-26 13:51:04.475352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.170 [2024-07-26 13:51:04.487376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.170 [2024-07-26 13:51:04.487385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2853512) - No such process 00:09:37.170 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2853512 00:09:37.170 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.170 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.170 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.170 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.170 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:37.170 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.170 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.170 delay0 00:09:37.170 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.170 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:37.170 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.170 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.170 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.170 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:37.170 EAL: No free 2048 kB hugepages reported on 
node 1 00:09:37.431 [2024-07-26 13:51:04.620383] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:44.008 Initializing NVMe Controllers 00:09:44.008 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:44.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:44.008 Initialization complete. Launching workers. 00:09:44.008 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 104 00:09:44.008 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 372, failed to submit 52 00:09:44.008 success 197, unsuccess 175, failed 0 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.008 rmmod nvme_tcp 00:09:44.008 rmmod nvme_fabrics 00:09:44.008 rmmod nvme_keyring 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2851646 ']' 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2851646 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2851646 ']' 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2851646 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2851646 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2851646' 00:09:44.008 killing process with pid 2851646 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2851646 00:09:44.008 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2851646 00:09:44.008 13:51:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:44.008 13:51:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:44.008 13:51:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:44.008 13:51:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:44.008 13:51:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:44.008 13:51:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.008 13:51:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.008 13:51:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.919 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:45.919 00:09:45.919 real 0m31.465s 00:09:45.919 user 0m43.174s 00:09:45.919 sys 0m10.215s 00:09:45.919 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.919 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.919 ************************************ 00:09:45.919 END TEST nvmf_zcopy 00:09:45.919 ************************************ 00:09:45.919 13:51:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:45.919 13:51:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:45.919 13:51:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.919 13:51:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:45.919 ************************************ 00:09:45.919 START TEST nvmf_nmic 00:09:45.919 ************************************ 00:09:45.919 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:45.919 * Looking for test storage... 
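The nvmf_zcopy teardown just above (nvmftestfini) boils down to a short sequence once the xtrace output is stripped away: flush outstanding I/O, unload the kernel NVMe/TCP initiator modules, stop the nvmf_tgt process, and remove the target network namespace. A minimal sketch of that sequence, assuming the PID, interface, and namespace names from this particular run (2851646, cvl_0_1, cvl_0_0_ns_spdk); the explicit "ip netns delete" is an assumption standing in for the hidden _remove_spdk_ns helper:

  # Sketch of the per-test teardown shown in the log (run as root; not the suite's exact helper)
  sync                              # flush outstanding I/O before unloading modules
  modprobe -v -r nvme-tcp           # unloads nvme_tcp (the "rmmod nvme_tcp" line in the log)
  modprobe -v -r nvme-fabrics       # then nvme_fabrics; nvme_keyring is removed alongside it
  kill 2851646                      # nvmfpid recorded when nvmf_tgt was started for this test
  ip netns delete cvl_0_0_ns_spdk   # assumption: the effect of _remove_spdk_ns in this run
  ip -4 addr flush cvl_0_1          # drop the 10.0.0.1/24 initiator address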
00:09:45.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.919 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.919 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:45.919 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.919 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.919 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.919 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.919 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.919 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.919 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.919 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.919 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.179 13:51:13 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:09:46.179 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:51.462 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:51.462 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.462 13:51:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:51.462 Found net devices under 0000:86:00.0: cvl_0_0 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:51.462 Found net devices under 0000:86:00.1: cvl_0_1 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:51.462 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:51.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:09:51.463 00:09:51.463 --- 10.0.0.2 ping statistics --- 00:09:51.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.463 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:51.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:51.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.430 ms 00:09:51.463 00:09:51.463 --- 10.0.0.1 ping statistics --- 00:09:51.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.463 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2859474 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2859474 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2859474 ']' 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.463 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.463 [2024-07-26 13:51:18.702803] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:09:51.463 [2024-07-26 13:51:18.702844] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.463 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.463 [2024-07-26 13:51:18.759675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.463 [2024-07-26 13:51:18.841676] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.463 [2024-07-26 13:51:18.841711] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.463 [2024-07-26 13:51:18.841718] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.463 [2024-07-26 13:51:18.841724] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.463 [2024-07-26 13:51:18.841730] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
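Beneath the xtrace noise, the nvmftestinit/nvmf_tcp_init steps above do the following: the two E810 ports under 0000:86:00.0/1 come up as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into a fresh network namespace to act as the target side (10.0.0.2/24) while cvl_0_1 stays in the host namespace as the initiator side (10.0.0.1/24), connectivity is checked with ping in both directions, and nvmf_tgt is then launched inside that namespace. A condensed sketch of those steps, assuming root and the interface names from this run:

  # Sketch of the target/initiator network setup from the log (interface names are machine-specific)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> host
  modprobe nvme-tcp                                                   # kernel initiator driver
  # Start the SPDK target inside the namespace with the same flags as the log
  # (-i 0 shared-memory id, -e 0xFFFF all tracepoint groups, -m 0xF four cores):
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &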
00:09:51.463 [2024-07-26 13:51:18.841773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.463 [2024-07-26 13:51:18.842018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.463 [2024-07-26 13:51:18.842084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.463 [2024-07-26 13:51:18.842086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.403 [2024-07-26 13:51:19.559313] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.403 Malloc0 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.403 [2024-07-26 13:51:19.611115] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:52.403 test case1: single bdev can't be used in multiple subsystems 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.403 [2024-07-26 13:51:19.635035] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:52.403 [2024-07-26 13:51:19.635056] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:52.403 [2024-07-26 13:51:19.635063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.403 request: 00:09:52.403 { 00:09:52.403 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:52.403 "namespace": { 00:09:52.403 "bdev_name": "Malloc0", 00:09:52.403 "no_auto_visible": false 00:09:52.403 }, 00:09:52.403 "method": "nvmf_subsystem_add_ns", 00:09:52.403 "req_id": 1 00:09:52.403 } 00:09:52.403 Got JSON-RPC error response 00:09:52.403 response: 00:09:52.403 { 00:09:52.403 "code": -32602, 00:09:52.403 "message": "Invalid parameters" 00:09:52.403 } 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:52.403 Adding namespace failed - expected result. 
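Test case1 above exercises the namespace claim rule: once Malloc0 is attached to cnode1, attaching the same bdev to a second subsystem fails because the bdev is already claimed with exclusive_write, and the nvmf_subsystem_add_ns JSON-RPC call returns -32602; the script counts that failure as the expected result. The rpc_cmd calls in the log issue these same JSON-RPC methods, so the check can be reproduced against a running target with scripts/rpc.py (its path is confirmed later in this log); a rough sketch using the names from this run:

  # Sketch: reproduce "single bdev can't be used in multiple subsystems" via rpc.py (run as root)
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  # Expected to fail: Malloc0 is already claimed (exclusive_write) by cnode1.
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'Adding namespace failed - expected result.'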
00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:52.403 test case2: host connect to nvmf target in multiple paths 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.403 [2024-07-26 13:51:19.647148] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.403 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:53.343 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:54.725 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:54.725 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:54.725 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:54.725 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:54.725 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:56.634 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:56.634 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:56.634 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:56.634 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:56.634 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:56.634 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:56.634 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:56.634 [global] 00:09:56.634 thread=1 00:09:56.634 invalidate=1 00:09:56.634 rw=write 00:09:56.634 time_based=1 00:09:56.634 runtime=1 00:09:56.634 ioengine=libaio 00:09:56.634 direct=1 00:09:56.634 bs=4096 00:09:56.634 iodepth=1 00:09:56.634 norandommap=0 00:09:56.634 numjobs=1 00:09:56.634 00:09:56.634 verify_dump=1 00:09:56.634 verify_backlog=512 00:09:56.634 verify_state_save=0 00:09:56.634 do_verify=1 00:09:56.634 verify=crc32c-intel 00:09:56.634 [job0] 00:09:56.634 filename=/dev/nvme0n1 00:09:56.634 Could not set queue depth (nvme0n1) 00:09:56.894 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:09:56.894 fio-3.35 00:09:56.894 Starting 1 thread 00:09:58.277 00:09:58.277 job0: (groupid=0, jobs=1): err= 0: pid=2860505: Fri Jul 26 13:51:25 2024 00:09:58.277 read: IOPS=18, BW=75.8KiB/s (77.7kB/s)(76.0KiB/1002msec) 00:09:58.277 slat (nsec): min=9641, max=23945, avg=22296.11, stdev=3080.71 00:09:58.277 clat (usec): min=41405, max=42973, avg=42103.68, stdev=392.62 00:09:58.277 lat (usec): min=41415, max=42996, avg=42125.97, stdev=393.89 00:09:58.277 clat percentiles (usec): 00:09:58.277 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[42206], 00:09:58.277 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:58.277 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:09:58.277 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:58.277 | 99.99th=[42730] 00:09:58.277 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:09:58.277 slat (usec): min=9, max=26691, avg=63.85, stdev=1179.10 00:09:58.277 clat (usec): min=249, max=1282, avg=323.41, stdev=149.86 00:09:58.277 lat (usec): min=259, max=27668, avg=387.27, stdev=1217.37 00:09:58.277 clat percentiles (usec): 00:09:58.277 | 1.00th=[ 253], 5.00th=[ 255], 10.00th=[ 255], 20.00th=[ 260], 00:09:58.277 | 30.00th=[ 262], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:09:58.277 | 70.00th=[ 273], 80.00th=[ 314], 90.00th=[ 494], 95.00th=[ 725], 00:09:58.277 | 99.00th=[ 906], 99.50th=[ 971], 99.90th=[ 1287], 99.95th=[ 1287], 00:09:58.277 | 99.99th=[ 1287] 00:09:58.277 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:58.277 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:58.277 lat (usec) : 250=0.19%, 500=87.19%, 750=5.08%, 1000=3.77% 00:09:58.277 lat (msec) : 2=0.19%, 50=3.58% 00:09:58.277 cpu : usr=0.30%, sys=0.60%, ctx=535, majf=0, minf=2 00:09:58.277 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.277 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.277 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.277 00:09:58.277 Run status group 0 (all jobs): 00:09:58.277 READ: bw=75.8KiB/s (77.7kB/s), 75.8KiB/s-75.8KiB/s (77.7kB/s-77.7kB/s), io=76.0KiB (77.8kB), run=1002-1002msec 00:09:58.277 WRITE: bw=2044KiB/s (2093kB/s), 2044KiB/s-2044KiB/s (2093kB/s-2093kB/s), io=2048KiB (2097kB), run=1002-1002msec 00:09:58.277 00:09:58.277 Disk stats (read/write): 00:09:58.277 nvme0n1: ios=41/512, merge=0/0, ticks=1641/161, in_queue=1802, util=98.60% 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:58.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o 
NAME,SERIAL 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:58.277 rmmod nvme_tcp 00:09:58.277 rmmod nvme_fabrics 00:09:58.277 rmmod nvme_keyring 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2859474 ']' 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2859474 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2859474 ']' 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2859474 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2859474 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2859474' 00:09:58.277 killing process with pid 2859474 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2859474 00:09:58.277 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2859474 00:09:58.536 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:58.536 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:58.536 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:58.536 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:58.536 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:58.536 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.536 13:51:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.536 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.481 13:51:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:00.481 00:10:00.481 real 0m14.638s 00:10:00.481 user 0m34.615s 00:10:00.481 sys 0m4.590s 00:10:00.481 13:51:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:00.481 13:51:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:00.481 ************************************ 00:10:00.481 END TEST nvmf_nmic 00:10:00.481 ************************************ 00:10:00.742 13:51:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:00.742 13:51:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:00.742 13:51:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:00.742 13:51:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.742 ************************************ 00:10:00.742 START TEST nvmf_fio_target 00:10:00.742 ************************************ 00:10:00.742 13:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:00.742 * Looking for test storage... 00:10:00.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:00.742 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:06.027 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:06.027 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:06.027 Found net devices under 0000:86:00.0: cvl_0_0 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:06.027 Found net devices under 0000:86:00.1: cvl_0_1 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.027 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:06.027 13:51:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:06.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:10:06.287 00:10:06.287 --- 10.0.0.2 ping statistics --- 00:10:06.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.287 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:06.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:10:06.287 00:10:06.287 --- 10.0.0.1 ping statistics --- 00:10:06.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.287 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2864247 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2864247 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2864247 ']' 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.287 13:51:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:06.287 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.287 [2024-07-26 13:51:33.607013] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:10:06.287 [2024-07-26 13:51:33.607060] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.287 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.287 [2024-07-26 13:51:33.664879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:06.548 [2024-07-26 13:51:33.746304] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.548 [2024-07-26 13:51:33.746342] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.548 [2024-07-26 13:51:33.746349] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.548 [2024-07-26 13:51:33.746356] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.548 [2024-07-26 13:51:33.746361] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:06.548 [2024-07-26 13:51:33.746407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.548 [2024-07-26 13:51:33.746646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.548 [2024-07-26 13:51:33.746715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.548 [2024-07-26 13:51:33.746716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.117 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:07.117 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:07.117 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:07.117 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:07.117 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.117 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.117 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:07.377 [2024-07-26 13:51:34.616827] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.377 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.637 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:07.637 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.637 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:07.637 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.897 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:07.897 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:08.157 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:08.157 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:08.416 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:08.416 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:08.416 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:08.676 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:08.676 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:08.936 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:08.936 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:08.936 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:09.195 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:09.195 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:09.456 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:09.456 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:09.717 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:09.717 [2024-07-26 13:51:37.094522] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.717 13:51:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:09.977 13:51:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:10.237 13:51:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:11.179 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:11.179 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:11.179 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:11.179 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:11.179 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:11.179 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:13.719 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:13.719 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:13.719 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:13.719 13:51:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:13.719 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:13.719 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:13.719 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:13.719 [global] 00:10:13.719 thread=1 00:10:13.719 invalidate=1 00:10:13.719 rw=write 00:10:13.719 time_based=1 00:10:13.719 runtime=1 00:10:13.719 ioengine=libaio 00:10:13.719 direct=1 00:10:13.719 bs=4096 00:10:13.719 iodepth=1 00:10:13.719 norandommap=0 00:10:13.719 numjobs=1 00:10:13.719 00:10:13.719 verify_dump=1 00:10:13.719 verify_backlog=512 00:10:13.719 verify_state_save=0 00:10:13.719 do_verify=1 00:10:13.719 verify=crc32c-intel 00:10:13.719 [job0] 00:10:13.719 filename=/dev/nvme0n1 00:10:13.719 [job1] 00:10:13.719 filename=/dev/nvme0n2 00:10:13.719 [job2] 00:10:13.719 filename=/dev/nvme0n3 00:10:13.719 [job3] 00:10:13.719 filename=/dev/nvme0n4 00:10:13.719 Could not set queue depth (nvme0n1) 00:10:13.719 Could not set queue depth (nvme0n2) 00:10:13.719 Could not set queue depth (nvme0n3) 00:10:13.719 Could not set queue depth (nvme0n4) 00:10:13.719 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.719 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.719 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.719 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.719 fio-3.35 00:10:13.719 Starting 4 threads 00:10:15.108 00:10:15.108 job0: (groupid=0, jobs=1): err= 0: pid=2865692: Fri Jul 26 13:51:42 2024 00:10:15.108 read: IOPS=321, BW=1285KiB/s (1316kB/s)(1320KiB/1027msec) 00:10:15.108 slat (nsec): min=6565, max=36123, avg=12570.17, stdev=7370.70 00:10:15.108 clat (usec): min=463, max=43290, avg=2583.84, stdev=8631.99 00:10:15.108 lat (usec): min=470, max=43312, avg=2596.41, stdev=8634.28 00:10:15.108 clat percentiles (usec): 00:10:15.108 | 1.00th=[ 469], 5.00th=[ 482], 10.00th=[ 494], 20.00th=[ 506], 00:10:15.108 | 30.00th=[ 515], 40.00th=[ 545], 50.00th=[ 578], 60.00th=[ 693], 00:10:15.108 | 70.00th=[ 791], 80.00th=[ 1057], 90.00th=[ 1205], 95.00th=[ 1450], 00:10:15.108 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:10:15.108 | 99.99th=[43254] 00:10:15.108 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:10:15.108 slat (nsec): min=9156, max=71603, avg=10567.13, stdev=3554.99 00:10:15.108 clat (usec): min=252, max=885, avg=315.12, stdev=111.38 00:10:15.108 lat (usec): min=263, max=931, avg=325.69, stdev=112.05 00:10:15.108 clat percentiles (usec): 00:10:15.108 | 1.00th=[ 253], 5.00th=[ 258], 10.00th=[ 258], 20.00th=[ 260], 00:10:15.108 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 265], 60.00th=[ 277], 00:10:15.108 | 70.00th=[ 302], 80.00th=[ 318], 90.00th=[ 437], 95.00th=[ 603], 00:10:15.108 | 99.00th=[ 750], 99.50th=[ 750], 99.90th=[ 889], 99.95th=[ 889], 00:10:15.108 | 99.99th=[ 889] 00:10:15.108 bw ( KiB/s): min= 4096, max= 4096, per=51.70%, avg=4096.00, stdev= 0.00, samples=1 00:10:15.108 iops : min= 1024, max= 1024, avg=1024.00, 
stdev= 0.00, samples=1 00:10:15.108 lat (usec) : 500=61.52%, 750=23.40%, 1000=6.18% 00:10:15.108 lat (msec) : 2=7.13%, 50=1.78% 00:10:15.108 cpu : usr=0.58%, sys=0.88%, ctx=843, majf=0, minf=1 00:10:15.108 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.108 issued rwts: total=330,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.108 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.108 job1: (groupid=0, jobs=1): err= 0: pid=2865707: Fri Jul 26 13:51:42 2024 00:10:15.108 read: IOPS=19, BW=77.4KiB/s (79.2kB/s)(80.0KiB/1034msec) 00:10:15.108 slat (nsec): min=9760, max=32868, avg=22329.05, stdev=3770.76 00:10:15.108 clat (usec): min=41288, max=42944, avg=41991.07, stdev=279.21 00:10:15.108 lat (usec): min=41298, max=42966, avg=42013.40, stdev=280.82 00:10:15.108 clat percentiles (usec): 00:10:15.108 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:10:15.108 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:15.108 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:15.108 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:15.108 | 99.99th=[42730] 00:10:15.108 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:10:15.108 slat (nsec): min=9546, max=40620, avg=10884.63, stdev=2216.37 00:10:15.108 clat (usec): min=251, max=1114, avg=362.77, stdev=107.54 00:10:15.108 lat (usec): min=261, max=1126, avg=373.65, stdev=108.13 00:10:15.108 clat percentiles (usec): 00:10:15.108 | 1.00th=[ 253], 5.00th=[ 260], 10.00th=[ 269], 20.00th=[ 285], 00:10:15.108 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 326], 60.00th=[ 363], 00:10:15.108 | 70.00th=[ 392], 80.00th=[ 412], 90.00th=[ 461], 95.00th=[ 611], 00:10:15.108 | 99.00th=[ 709], 99.50th=[ 832], 99.90th=[ 1123], 99.95th=[ 1123], 00:10:15.108 | 99.99th=[ 1123] 00:10:15.108 bw ( KiB/s): min= 4096, max= 4096, per=51.70%, avg=4096.00, stdev= 0.00, samples=1 00:10:15.108 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:15.108 lat (usec) : 500=88.53%, 750=7.14%, 1000=0.38% 00:10:15.108 lat (msec) : 2=0.19%, 50=3.76% 00:10:15.108 cpu : usr=0.29%, sys=0.58%, ctx=534, majf=0, minf=1 00:10:15.108 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.108 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.108 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.108 job2: (groupid=0, jobs=1): err= 0: pid=2865720: Fri Jul 26 13:51:42 2024 00:10:15.108 read: IOPS=70, BW=281KiB/s (288kB/s)(288KiB/1025msec) 00:10:15.108 slat (nsec): min=8388, max=35541, avg=21297.64, stdev=5561.18 00:10:15.108 clat (usec): min=669, max=42489, avg=11191.09, stdev=17897.12 00:10:15.108 lat (usec): min=692, max=42512, avg=11212.39, stdev=17897.63 00:10:15.108 clat percentiles (usec): 00:10:15.108 | 1.00th=[ 668], 5.00th=[ 791], 10.00th=[ 824], 20.00th=[ 840], 00:10:15.108 | 30.00th=[ 857], 40.00th=[ 881], 50.00th=[ 898], 60.00th=[ 1123], 00:10:15.108 | 70.00th=[ 1237], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:15.108 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 
00:10:15.108 | 99.99th=[42730] 00:10:15.108 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:10:15.108 slat (nsec): min=10728, max=53127, avg=13559.01, stdev=4058.20 00:10:15.108 clat (usec): min=255, max=1063, avg=404.13, stdev=134.70 00:10:15.108 lat (usec): min=267, max=1076, avg=417.69, stdev=135.73 00:10:15.108 clat percentiles (usec): 00:10:15.108 | 1.00th=[ 258], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 306], 00:10:15.108 | 30.00th=[ 326], 40.00th=[ 355], 50.00th=[ 371], 60.00th=[ 404], 00:10:15.108 | 70.00th=[ 424], 80.00th=[ 474], 90.00th=[ 553], 95.00th=[ 701], 00:10:15.108 | 99.00th=[ 947], 99.50th=[ 1012], 99.90th=[ 1057], 99.95th=[ 1057], 00:10:15.108 | 99.99th=[ 1057] 00:10:15.108 bw ( KiB/s): min= 4096, max= 4096, per=51.70%, avg=4096.00, stdev= 0.00, samples=1 00:10:15.108 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:15.108 lat (usec) : 500=74.32%, 750=11.13%, 1000=9.08% 00:10:15.108 lat (msec) : 2=2.40%, 50=3.08% 00:10:15.108 cpu : usr=0.20%, sys=0.98%, ctx=586, majf=0, minf=1 00:10:15.108 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.108 issued rwts: total=72,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.108 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.108 job3: (groupid=0, jobs=1): err= 0: pid=2865725: Fri Jul 26 13:51:42 2024 00:10:15.108 read: IOPS=19, BW=79.2KiB/s (81.1kB/s)(80.0KiB/1010msec) 00:10:15.108 slat (nsec): min=10217, max=24314, avg=22166.85, stdev=3848.39 00:10:15.108 clat (usec): min=41568, max=42099, avg=41956.59, stdev=113.09 00:10:15.108 lat (usec): min=41578, max=42112, avg=41978.76, stdev=114.60 00:10:15.108 clat percentiles (usec): 00:10:15.108 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:10:15.108 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:15.108 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:15.108 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:15.108 | 99.99th=[42206] 00:10:15.108 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:10:15.108 slat (nsec): min=11268, max=40679, avg=12809.26, stdev=2041.76 00:10:15.108 clat (usec): min=253, max=898, avg=313.45, stdev=101.04 00:10:15.108 lat (usec): min=265, max=938, avg=326.26, stdev=101.46 00:10:15.108 clat percentiles (usec): 00:10:15.108 | 1.00th=[ 255], 5.00th=[ 258], 10.00th=[ 260], 20.00th=[ 262], 00:10:15.108 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:10:15.108 | 70.00th=[ 297], 80.00th=[ 326], 90.00th=[ 424], 95.00th=[ 562], 00:10:15.108 | 99.00th=[ 709], 99.50th=[ 734], 99.90th=[ 898], 99.95th=[ 898], 00:10:15.108 | 99.99th=[ 898] 00:10:15.108 bw ( KiB/s): min= 4096, max= 4096, per=51.70%, avg=4096.00, stdev= 0.00, samples=1 00:10:15.108 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:15.108 lat (usec) : 500=89.47%, 750=6.58%, 1000=0.19% 00:10:15.108 lat (msec) : 50=3.76% 00:10:15.108 cpu : usr=0.50%, sys=0.89%, ctx=534, majf=0, minf=2 00:10:15.108 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.108 issued rwts: 
total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.108 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.108 00:10:15.108 Run status group 0 (all jobs): 00:10:15.108 READ: bw=1710KiB/s (1751kB/s), 77.4KiB/s-1285KiB/s (79.2kB/s-1316kB/s), io=1768KiB (1810kB), run=1010-1034msec 00:10:15.108 WRITE: bw=7923KiB/s (8113kB/s), 1981KiB/s-2028KiB/s (2028kB/s-2076kB/s), io=8192KiB (8389kB), run=1010-1034msec 00:10:15.108 00:10:15.108 Disk stats (read/write): 00:10:15.108 nvme0n1: ios=375/512, merge=0/0, ticks=701/157, in_queue=858, util=89.18% 00:10:15.108 nvme0n2: ios=38/512, merge=0/0, ticks=1595/188, in_queue=1783, util=98.88% 00:10:15.108 nvme0n3: ios=90/512, merge=0/0, ticks=1561/207, in_queue=1768, util=98.85% 00:10:15.108 nvme0n4: ios=39/512, merge=0/0, ticks=1638/154, in_queue=1792, util=98.74% 00:10:15.108 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:15.108 [global] 00:10:15.109 thread=1 00:10:15.109 invalidate=1 00:10:15.109 rw=randwrite 00:10:15.109 time_based=1 00:10:15.109 runtime=1 00:10:15.109 ioengine=libaio 00:10:15.109 direct=1 00:10:15.109 bs=4096 00:10:15.109 iodepth=1 00:10:15.109 norandommap=0 00:10:15.109 numjobs=1 00:10:15.109 00:10:15.109 verify_dump=1 00:10:15.109 verify_backlog=512 00:10:15.109 verify_state_save=0 00:10:15.109 do_verify=1 00:10:15.109 verify=crc32c-intel 00:10:15.109 [job0] 00:10:15.109 filename=/dev/nvme0n1 00:10:15.109 [job1] 00:10:15.109 filename=/dev/nvme0n2 00:10:15.109 [job2] 00:10:15.109 filename=/dev/nvme0n3 00:10:15.109 [job3] 00:10:15.109 filename=/dev/nvme0n4 00:10:15.109 Could not set queue depth (nvme0n1) 00:10:15.109 Could not set queue depth (nvme0n2) 00:10:15.109 Could not set queue depth (nvme0n3) 00:10:15.109 Could not set queue depth (nvme0n4) 00:10:15.366 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.366 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.366 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.367 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.367 fio-3.35 00:10:15.367 Starting 4 threads 00:10:16.744 00:10:16.744 job0: (groupid=0, jobs=1): err= 0: pid=2866171: Fri Jul 26 13:51:43 2024 00:10:16.744 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:16.744 slat (nsec): min=6648, max=27541, avg=7533.21, stdev=1028.74 00:10:16.744 clat (usec): min=475, max=839, avg=582.15, stdev=26.45 00:10:16.744 lat (usec): min=482, max=847, avg=589.69, stdev=26.43 00:10:16.744 clat percentiles (usec): 00:10:16.744 | 1.00th=[ 502], 5.00th=[ 537], 10.00th=[ 562], 20.00th=[ 570], 00:10:16.744 | 30.00th=[ 578], 40.00th=[ 578], 50.00th=[ 586], 60.00th=[ 586], 00:10:16.744 | 70.00th=[ 594], 80.00th=[ 594], 90.00th=[ 603], 95.00th=[ 611], 00:10:16.744 | 99.00th=[ 652], 99.50th=[ 685], 99.90th=[ 816], 99.95th=[ 840], 00:10:16.744 | 99.99th=[ 840] 00:10:16.744 write: IOPS=1206, BW=4827KiB/s (4943kB/s)(4832KiB/1001msec); 0 zone resets 00:10:16.744 slat (nsec): min=9043, max=34432, avg=10838.76, stdev=3096.72 00:10:16.744 clat (usec): min=252, max=1322, avg=312.79, stdev=108.30 00:10:16.744 lat (usec): min=262, max=1333, avg=323.63, stdev=110.13 00:10:16.744 clat percentiles (usec): 00:10:16.744 | 
1.00th=[ 258], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 265], 00:10:16.744 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:10:16.744 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 363], 95.00th=[ 553], 00:10:16.744 | 99.00th=[ 799], 99.50th=[ 857], 99.90th=[ 1106], 99.95th=[ 1319], 00:10:16.744 | 99.99th=[ 1319] 00:10:16.744 bw ( KiB/s): min= 4311, max= 4311, per=25.58%, avg=4311.00, stdev= 0.00, samples=1 00:10:16.744 iops : min= 1077, max= 1077, avg=1077.00, stdev= 0.00, samples=1 00:10:16.744 lat (usec) : 500=51.30%, 750=47.67%, 1000=0.94% 00:10:16.744 lat (msec) : 2=0.09% 00:10:16.744 cpu : usr=0.90%, sys=2.40%, ctx=2234, majf=0, minf=1 00:10:16.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.744 issued rwts: total=1024,1208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.744 job1: (groupid=0, jobs=1): err= 0: pid=2866188: Fri Jul 26 13:51:43 2024 00:10:16.744 read: IOPS=636, BW=2547KiB/s (2608kB/s)(2552KiB/1002msec) 00:10:16.744 slat (nsec): min=6284, max=23997, avg=7200.62, stdev=1122.13 00:10:16.744 clat (usec): min=489, max=3170, avg=614.24, stdev=133.52 00:10:16.744 lat (usec): min=496, max=3177, avg=621.44, stdev=133.69 00:10:16.744 clat percentiles (usec): 00:10:16.744 | 1.00th=[ 498], 5.00th=[ 529], 10.00th=[ 553], 20.00th=[ 586], 00:10:16.744 | 30.00th=[ 586], 40.00th=[ 594], 50.00th=[ 594], 60.00th=[ 594], 00:10:16.744 | 70.00th=[ 603], 80.00th=[ 603], 90.00th=[ 652], 95.00th=[ 807], 00:10:16.744 | 99.00th=[ 1139], 99.50th=[ 1172], 99.90th=[ 3163], 99.95th=[ 3163], 00:10:16.744 | 99.99th=[ 3163] 00:10:16.744 write: IOPS=1021, BW=4088KiB/s (4186kB/s)(4096KiB/1002msec); 0 zone resets 00:10:16.744 slat (nsec): min=8994, max=69364, avg=10598.74, stdev=3152.93 00:10:16.744 clat (usec): min=378, max=1090, avg=576.05, stdev=67.19 00:10:16.744 lat (usec): min=388, max=1101, avg=586.65, stdev=68.49 00:10:16.744 clat percentiles (usec): 00:10:16.744 | 1.00th=[ 383], 5.00th=[ 482], 10.00th=[ 545], 20.00th=[ 570], 00:10:16.744 | 30.00th=[ 578], 40.00th=[ 578], 50.00th=[ 578], 60.00th=[ 578], 00:10:16.744 | 70.00th=[ 586], 80.00th=[ 586], 90.00th=[ 586], 95.00th=[ 611], 00:10:16.744 | 99.00th=[ 857], 99.50th=[ 930], 99.90th=[ 1020], 99.95th=[ 1090], 00:10:16.744 | 99.99th=[ 1090] 00:10:16.744 bw ( KiB/s): min= 4087, max= 4087, per=24.25%, avg=4087.00, stdev= 0.00, samples=1 00:10:16.744 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:16.744 lat (usec) : 500=6.68%, 750=89.05%, 1000=3.67% 00:10:16.744 lat (msec) : 2=0.54%, 4=0.06% 00:10:16.744 cpu : usr=0.80%, sys=1.60%, ctx=1663, majf=0, minf=1 00:10:16.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.744 issued rwts: total=638,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.744 job2: (groupid=0, jobs=1): err= 0: pid=2866189: Fri Jul 26 13:51:43 2024 00:10:16.744 read: IOPS=594, BW=2378KiB/s (2435kB/s)(2416KiB/1016msec) 00:10:16.744 slat (nsec): min=6699, max=26431, avg=7519.66, stdev=1227.02 00:10:16.744 clat (usec): min=492, max=42029, avg=677.69, 
stdev=1687.17 00:10:16.744 lat (usec): min=499, max=42039, avg=685.21, stdev=1687.26 00:10:16.744 clat percentiles (usec): 00:10:16.744 | 1.00th=[ 498], 5.00th=[ 529], 10.00th=[ 553], 20.00th=[ 586], 00:10:16.744 | 30.00th=[ 586], 40.00th=[ 594], 50.00th=[ 594], 60.00th=[ 594], 00:10:16.744 | 70.00th=[ 594], 80.00th=[ 603], 90.00th=[ 652], 95.00th=[ 807], 00:10:16.744 | 99.00th=[ 971], 99.50th=[ 1004], 99.90th=[42206], 99.95th=[42206], 00:10:16.744 | 99.99th=[42206] 00:10:16.744 write: IOPS=1007, BW=4031KiB/s (4128kB/s)(4096KiB/1016msec); 0 zone resets 00:10:16.744 slat (nsec): min=9532, max=47945, avg=11339.72, stdev=3651.27 00:10:16.744 clat (usec): min=301, max=1091, avg=572.16, stdev=68.86 00:10:16.744 lat (usec): min=312, max=1102, avg=583.50, stdev=70.38 00:10:16.744 clat percentiles (usec): 00:10:16.744 | 1.00th=[ 371], 5.00th=[ 478], 10.00th=[ 490], 20.00th=[ 570], 00:10:16.744 | 30.00th=[ 578], 40.00th=[ 578], 50.00th=[ 578], 60.00th=[ 578], 00:10:16.744 | 70.00th=[ 578], 80.00th=[ 586], 90.00th=[ 586], 95.00th=[ 611], 00:10:16.744 | 99.00th=[ 840], 99.50th=[ 881], 99.90th=[ 1012], 99.95th=[ 1090], 00:10:16.744 | 99.99th=[ 1090] 00:10:16.744 bw ( KiB/s): min= 4087, max= 4096, per=24.28%, avg=4091.50, stdev= 6.36, samples=2 00:10:16.744 iops : min= 1021, max= 1024, avg=1022.50, stdev= 2.12, samples=2 00:10:16.744 lat (usec) : 500=7.80%, 750=88.57%, 1000=3.26% 00:10:16.744 lat (msec) : 2=0.31%, 50=0.06% 00:10:16.744 cpu : usr=1.08%, sys=1.38%, ctx=1630, majf=0, minf=2 00:10:16.745 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.745 issued rwts: total=604,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.745 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.745 job3: (groupid=0, jobs=1): err= 0: pid=2866191: Fri Jul 26 13:51:43 2024 00:10:16.745 read: IOPS=617, BW=2470KiB/s (2529kB/s)(2472KiB/1001msec) 00:10:16.745 slat (nsec): min=3121, max=25527, avg=7288.03, stdev=1347.01 00:10:16.745 clat (usec): min=475, max=3264, avg=642.10, stdev=169.95 00:10:16.745 lat (usec): min=482, max=3272, avg=649.39, stdev=169.94 00:10:16.745 clat percentiles (usec): 00:10:16.745 | 1.00th=[ 494], 5.00th=[ 570], 10.00th=[ 586], 20.00th=[ 586], 00:10:16.745 | 30.00th=[ 594], 40.00th=[ 594], 50.00th=[ 594], 60.00th=[ 603], 00:10:16.745 | 70.00th=[ 603], 80.00th=[ 652], 90.00th=[ 791], 95.00th=[ 971], 00:10:16.745 | 99.00th=[ 988], 99.50th=[ 1090], 99.90th=[ 3261], 99.95th=[ 3261], 00:10:16.745 | 99.99th=[ 3261] 00:10:16.745 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:16.745 slat (nsec): min=9047, max=38694, avg=10579.48, stdev=2470.36 00:10:16.745 clat (usec): min=375, max=1073, avg=570.38, stdev=64.86 00:10:16.745 lat (usec): min=387, max=1083, avg=580.96, stdev=65.35 00:10:16.745 clat percentiles (usec): 00:10:16.745 | 1.00th=[ 383], 5.00th=[ 420], 10.00th=[ 545], 20.00th=[ 570], 00:10:16.745 | 30.00th=[ 578], 40.00th=[ 578], 50.00th=[ 578], 60.00th=[ 578], 00:10:16.745 | 70.00th=[ 578], 80.00th=[ 586], 90.00th=[ 586], 95.00th=[ 603], 00:10:16.745 | 99.00th=[ 791], 99.50th=[ 807], 99.90th=[ 1057], 99.95th=[ 1074], 00:10:16.745 | 99.99th=[ 1074] 00:10:16.745 bw ( KiB/s): min= 4096, max= 4096, per=24.31%, avg=4096.00, stdev= 0.00, samples=1 00:10:16.745 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:16.745 lat (usec) : 
500=6.64%, 750=87.76%, 1000=5.24% 00:10:16.745 lat (msec) : 2=0.24%, 4=0.12% 00:10:16.745 cpu : usr=0.70%, sys=1.70%, ctx=1643, majf=0, minf=1 00:10:16.745 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.745 issued rwts: total=618,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.745 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.745 00:10:16.745 Run status group 0 (all jobs): 00:10:16.745 READ: bw=11.1MiB/s (11.6MB/s), 2378KiB/s-4092KiB/s (2435kB/s-4190kB/s), io=11.3MiB (11.8MB), run=1001-1016msec 00:10:16.745 WRITE: bw=16.5MiB/s (17.3MB/s), 4031KiB/s-4827KiB/s (4128kB/s-4943kB/s), io=16.7MiB (17.5MB), run=1001-1016msec 00:10:16.745 00:10:16.745 Disk stats (read/write): 00:10:16.745 nvme0n1: ios=856/1024, merge=0/0, ticks=1281/317, in_queue=1598, util=98.20% 00:10:16.745 nvme0n2: ios=542/855, merge=0/0, ticks=346/487, in_queue=833, util=84.89% 00:10:16.745 nvme0n3: ios=535/861, merge=0/0, ticks=1232/488, in_queue=1720, util=98.70% 00:10:16.745 nvme0n4: ios=512/803, merge=0/0, ticks=321/461, in_queue=782, util=89.10% 00:10:16.745 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:16.745 [global] 00:10:16.745 thread=1 00:10:16.745 invalidate=1 00:10:16.745 rw=write 00:10:16.745 time_based=1 00:10:16.745 runtime=1 00:10:16.745 ioengine=libaio 00:10:16.745 direct=1 00:10:16.745 bs=4096 00:10:16.745 iodepth=128 00:10:16.745 norandommap=0 00:10:16.745 numjobs=1 00:10:16.745 00:10:16.745 verify_dump=1 00:10:16.745 verify_backlog=512 00:10:16.745 verify_state_save=0 00:10:16.745 do_verify=1 00:10:16.745 verify=crc32c-intel 00:10:16.745 [job0] 00:10:16.745 filename=/dev/nvme0n1 00:10:16.745 [job1] 00:10:16.745 filename=/dev/nvme0n2 00:10:16.745 [job2] 00:10:16.745 filename=/dev/nvme0n3 00:10:16.745 [job3] 00:10:16.745 filename=/dev/nvme0n4 00:10:16.745 Could not set queue depth (nvme0n1) 00:10:16.745 Could not set queue depth (nvme0n2) 00:10:16.745 Could not set queue depth (nvme0n3) 00:10:16.745 Could not set queue depth (nvme0n4) 00:10:16.745 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:16.745 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:16.745 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:16.745 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:16.745 fio-3.35 00:10:16.745 Starting 4 threads 00:10:18.126 00:10:18.126 job0: (groupid=0, jobs=1): err= 0: pid=2866567: Fri Jul 26 13:51:45 2024 00:10:18.126 read: IOPS=3186, BW=12.4MiB/s (13.1MB/s)(12.5MiB/1004msec) 00:10:18.126 slat (nsec): min=1012, max=17965k, avg=144050.55, stdev=912539.37 00:10:18.126 clat (usec): min=963, max=54918, avg=19169.16, stdev=11338.95 00:10:18.126 lat (usec): min=1866, max=54923, avg=19313.21, stdev=11386.61 00:10:18.126 clat percentiles (usec): 00:10:18.126 | 1.00th=[ 3097], 5.00th=[ 8160], 10.00th=[ 8979], 20.00th=[10159], 00:10:18.126 | 30.00th=[11994], 40.00th=[13566], 50.00th=[15795], 60.00th=[17433], 00:10:18.126 | 70.00th=[21627], 80.00th=[27657], 90.00th=[37487], 95.00th=[45351], 00:10:18.126 | 
99.00th=[53216], 99.50th=[53216], 99.90th=[54789], 99.95th=[54789], 00:10:18.126 | 99.99th=[54789] 00:10:18.126 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:10:18.126 slat (nsec): min=1875, max=30412k, avg=140618.66, stdev=887243.60 00:10:18.126 clat (usec): min=1403, max=63557, avg=16884.68, stdev=8387.89 00:10:18.126 lat (usec): min=1414, max=63595, avg=17025.30, stdev=8447.31 00:10:18.126 clat percentiles (usec): 00:10:18.126 | 1.00th=[ 2868], 5.00th=[ 5866], 10.00th=[ 7439], 20.00th=[ 9765], 00:10:18.126 | 30.00th=[12125], 40.00th=[14091], 50.00th=[16188], 60.00th=[17433], 00:10:18.126 | 70.00th=[19792], 80.00th=[22414], 90.00th=[28705], 95.00th=[33162], 00:10:18.126 | 99.00th=[42206], 99.50th=[44303], 99.90th=[46924], 99.95th=[46924], 00:10:18.126 | 99.99th=[63701] 00:10:18.126 bw ( KiB/s): min= 9872, max=18792, per=23.44%, avg=14332.00, stdev=6307.39, samples=2 00:10:18.126 iops : min= 2468, max= 4698, avg=3583.00, stdev=1576.85, samples=2 00:10:18.126 lat (usec) : 1000=0.01% 00:10:18.126 lat (msec) : 2=0.40%, 4=1.81%, 10=18.74%, 20=48.12%, 50=29.87% 00:10:18.126 lat (msec) : 100=1.05% 00:10:18.126 cpu : usr=1.69%, sys=2.79%, ctx=474, majf=0, minf=1 00:10:18.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:18.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.126 issued rwts: total=3199,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.126 job1: (groupid=0, jobs=1): err= 0: pid=2866568: Fri Jul 26 13:51:45 2024 00:10:18.126 read: IOPS=3833, BW=15.0MiB/s (15.7MB/s)(15.2MiB/1014msec) 00:10:18.126 slat (nsec): min=1112, max=14216k, avg=101015.29, stdev=707056.39 00:10:18.126 clat (usec): min=2507, max=34757, avg=14634.14, stdev=4981.77 00:10:18.126 lat (usec): min=2529, max=34963, avg=14735.16, stdev=5014.80 00:10:18.126 clat percentiles (usec): 00:10:18.126 | 1.00th=[ 4228], 5.00th=[ 8094], 10.00th=[ 9110], 20.00th=[10552], 00:10:18.126 | 30.00th=[11731], 40.00th=[12911], 50.00th=[14353], 60.00th=[15270], 00:10:18.126 | 70.00th=[16581], 80.00th=[18482], 90.00th=[21365], 95.00th=[24249], 00:10:18.126 | 99.00th=[27132], 99.50th=[30540], 99.90th=[30540], 99.95th=[30540], 00:10:18.126 | 99.99th=[34866] 00:10:18.126 write: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1014msec); 0 zone resets 00:10:18.126 slat (nsec): min=1966, max=30387k, avg=140664.93, stdev=913045.69 00:10:18.126 clat (usec): min=3139, max=63070, avg=16546.82, stdev=10166.13 00:10:18.126 lat (usec): min=3153, max=63085, avg=16687.48, stdev=10224.92 00:10:18.126 clat percentiles (usec): 00:10:18.126 | 1.00th=[ 4686], 5.00th=[ 6587], 10.00th=[ 8356], 20.00th=[10159], 00:10:18.126 | 30.00th=[11600], 40.00th=[12256], 50.00th=[13042], 60.00th=[15008], 00:10:18.126 | 70.00th=[16909], 80.00th=[19792], 90.00th=[28967], 95.00th=[36963], 00:10:18.126 | 99.00th=[58459], 99.50th=[62129], 99.90th=[63177], 99.95th=[63177], 00:10:18.126 | 99.99th=[63177] 00:10:18.126 bw ( KiB/s): min=15152, max=17616, per=26.80%, avg=16384.00, stdev=1742.31, samples=2 00:10:18.126 iops : min= 3788, max= 4404, avg=4096.00, stdev=435.58, samples=2 00:10:18.126 lat (msec) : 4=0.38%, 10=17.49%, 20=65.33%, 50=15.37%, 100=1.44% 00:10:18.126 cpu : usr=3.26%, sys=2.76%, ctx=485, majf=0, minf=1 00:10:18.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:18.126 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.126 issued rwts: total=3887,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.126 job2: (groupid=0, jobs=1): err= 0: pid=2866569: Fri Jul 26 13:51:45 2024 00:10:18.126 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:10:18.126 slat (nsec): min=1067, max=11936k, avg=122025.28, stdev=777058.25 00:10:18.126 clat (usec): min=4047, max=79741, avg=18506.39, stdev=11269.44 00:10:18.126 lat (usec): min=4050, max=79743, avg=18628.42, stdev=11302.33 00:10:18.126 clat percentiles (usec): 00:10:18.126 | 1.00th=[ 7177], 5.00th=[ 8029], 10.00th=[ 9634], 20.00th=[11338], 00:10:18.126 | 30.00th=[13173], 40.00th=[14353], 50.00th=[16581], 60.00th=[17957], 00:10:18.126 | 70.00th=[20055], 80.00th=[21627], 90.00th=[26084], 95.00th=[37487], 00:10:18.126 | 99.00th=[71828], 99.50th=[79168], 99.90th=[80217], 99.95th=[80217], 00:10:18.126 | 99.99th=[80217] 00:10:18.126 write: IOPS=3616, BW=14.1MiB/s (14.8MB/s)(14.3MiB/1010msec); 0 zone resets 00:10:18.126 slat (nsec): min=1995, max=59004k, avg=150016.01, stdev=1337155.35 00:10:18.126 clat (usec): min=1258, max=71941, avg=15870.16, stdev=7442.98 00:10:18.126 lat (usec): min=5092, max=71949, avg=16020.18, stdev=7511.61 00:10:18.126 clat percentiles (usec): 00:10:18.126 | 1.00th=[ 5211], 5.00th=[ 7308], 10.00th=[ 9372], 20.00th=[11076], 00:10:18.126 | 30.00th=[11994], 40.00th=[13566], 50.00th=[14746], 60.00th=[16057], 00:10:18.126 | 70.00th=[18220], 80.00th=[20317], 90.00th=[22414], 95.00th=[25035], 00:10:18.126 | 99.00th=[31851], 99.50th=[70779], 99.90th=[70779], 99.95th=[71828], 00:10:18.126 | 99.99th=[71828] 00:10:18.126 bw ( KiB/s): min=12288, max=16384, per=23.45%, avg=14336.00, stdev=2896.31, samples=2 00:10:18.126 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:10:18.126 lat (msec) : 2=0.01%, 4=0.01%, 10=11.70%, 20=62.66%, 50=23.84% 00:10:18.126 lat (msec) : 100=1.77% 00:10:18.126 cpu : usr=1.29%, sys=3.57%, ctx=432, majf=0, minf=1 00:10:18.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:18.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.126 issued rwts: total=3584,3653,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.126 job3: (groupid=0, jobs=1): err= 0: pid=2866570: Fri Jul 26 13:51:45 2024 00:10:18.126 read: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec) 00:10:18.126 slat (nsec): min=1108, max=22250k, avg=111560.27, stdev=813620.52 00:10:18.126 clat (usec): min=1911, max=60164, avg=15544.76, stdev=6964.62 00:10:18.126 lat (usec): min=1930, max=60174, avg=15656.32, stdev=7021.52 00:10:18.126 clat percentiles (usec): 00:10:18.126 | 1.00th=[ 4621], 5.00th=[ 8717], 10.00th=[10028], 20.00th=[11076], 00:10:18.126 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13960], 60.00th=[15008], 00:10:18.126 | 70.00th=[16450], 80.00th=[18482], 90.00th=[24511], 95.00th=[27657], 00:10:18.126 | 99.00th=[45876], 99.50th=[55313], 99.90th=[60031], 99.95th=[60031], 00:10:18.126 | 99.99th=[60031] 00:10:18.126 write: IOPS=4111, BW=16.1MiB/s (16.8MB/s)(16.3MiB/1013msec); 0 zone resets 00:10:18.126 slat (nsec): min=1872, max=8159.0k, avg=118501.75, stdev=570930.55 00:10:18.126 clat (usec): min=2755, 
max=60122, avg=15550.88, stdev=8439.96 00:10:18.126 lat (usec): min=3301, max=60128, avg=15669.38, stdev=8481.23 00:10:18.126 clat percentiles (usec): 00:10:18.126 | 1.00th=[ 4146], 5.00th=[ 7111], 10.00th=[ 8455], 20.00th=[10159], 00:10:18.126 | 30.00th=[11338], 40.00th=[12649], 50.00th=[13566], 60.00th=[14746], 00:10:18.126 | 70.00th=[16188], 80.00th=[18220], 90.00th=[23725], 95.00th=[38536], 00:10:18.126 | 99.00th=[47449], 99.50th=[48497], 99.90th=[50070], 99.95th=[50070], 00:10:18.126 | 99.99th=[60031] 00:10:18.126 bw ( KiB/s): min=12432, max=20336, per=26.80%, avg=16384.00, stdev=5588.97, samples=2 00:10:18.126 iops : min= 3108, max= 5084, avg=4096.00, stdev=1397.24, samples=2 00:10:18.126 lat (msec) : 2=0.04%, 4=0.42%, 10=14.04%, 20=70.45%, 50=14.59% 00:10:18.126 lat (msec) : 100=0.46% 00:10:18.126 cpu : usr=1.98%, sys=3.85%, ctx=575, majf=0, minf=1 00:10:18.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:18.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.126 issued rwts: total=4096,4165,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.126 00:10:18.126 Run status group 0 (all jobs): 00:10:18.126 READ: bw=56.9MiB/s (59.6MB/s), 12.4MiB/s-15.8MiB/s (13.1MB/s-16.6MB/s), io=57.7MiB (60.5MB), run=1004-1014msec 00:10:18.126 WRITE: bw=59.7MiB/s (62.6MB/s), 13.9MiB/s-16.1MiB/s (14.6MB/s-16.8MB/s), io=60.5MiB (63.5MB), run=1004-1014msec 00:10:18.126 00:10:18.126 Disk stats (read/write): 00:10:18.127 nvme0n1: ios=2181/2560, merge=0/0, ticks=17968/24813, in_queue=42781, util=89.88% 00:10:18.127 nvme0n2: ios=2984/3072, merge=0/0, ticks=35900/45029, in_queue=80929, util=93.74% 00:10:18.127 nvme0n3: ios=3093/3291, merge=0/0, ticks=28679/28886, in_queue=57565, util=97.08% 00:10:18.127 nvme0n4: ios=3629/3654, merge=0/0, ticks=41229/35915, in_queue=77144, util=97.80% 00:10:18.127 13:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:18.127 [global] 00:10:18.127 thread=1 00:10:18.127 invalidate=1 00:10:18.127 rw=randwrite 00:10:18.127 time_based=1 00:10:18.127 runtime=1 00:10:18.127 ioengine=libaio 00:10:18.127 direct=1 00:10:18.127 bs=4096 00:10:18.127 iodepth=128 00:10:18.127 norandommap=0 00:10:18.127 numjobs=1 00:10:18.127 00:10:18.127 verify_dump=1 00:10:18.127 verify_backlog=512 00:10:18.127 verify_state_save=0 00:10:18.127 do_verify=1 00:10:18.127 verify=crc32c-intel 00:10:18.127 [job0] 00:10:18.127 filename=/dev/nvme0n1 00:10:18.127 [job1] 00:10:18.127 filename=/dev/nvme0n2 00:10:18.127 [job2] 00:10:18.127 filename=/dev/nvme0n3 00:10:18.127 [job3] 00:10:18.127 filename=/dev/nvme0n4 00:10:18.127 Could not set queue depth (nvme0n1) 00:10:18.127 Could not set queue depth (nvme0n2) 00:10:18.127 Could not set queue depth (nvme0n3) 00:10:18.127 Could not set queue depth (nvme0n4) 00:10:18.386 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:18.386 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:18.386 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:18.386 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:10:18.386 fio-3.35 00:10:18.386 Starting 4 threads 00:10:19.811 00:10:19.811 job0: (groupid=0, jobs=1): err= 0: pid=2866942: Fri Jul 26 13:51:47 2024 00:10:19.811 read: IOPS=3000, BW=11.7MiB/s (12.3MB/s)(12.0MiB/1024msec) 00:10:19.811 slat (nsec): min=1072, max=26746k, avg=159432.84, stdev=1228235.29 00:10:19.811 clat (usec): min=4877, max=76421, avg=22150.84, stdev=18130.25 00:10:19.811 lat (usec): min=4881, max=76427, avg=22310.28, stdev=18227.86 00:10:19.811 clat percentiles (usec): 00:10:19.811 | 1.00th=[ 5342], 5.00th=[ 5997], 10.00th=[ 6587], 20.00th=[ 7242], 00:10:19.811 | 30.00th=[ 7767], 40.00th=[ 9241], 50.00th=[12387], 60.00th=[21890], 00:10:19.811 | 70.00th=[30016], 80.00th=[41157], 90.00th=[49021], 95.00th=[59507], 00:10:19.811 | 99.00th=[74974], 99.50th=[74974], 99.90th=[76022], 99.95th=[76022], 00:10:19.811 | 99.99th=[76022] 00:10:19.811 write: IOPS=3402, BW=13.3MiB/s (13.9MB/s)(13.6MiB/1024msec); 0 zone resets 00:10:19.811 slat (nsec): min=1831, max=27548k, avg=144150.70, stdev=874554.59 00:10:19.811 clat (usec): min=5105, max=60302, avg=17429.43, stdev=11862.40 00:10:19.811 lat (usec): min=5110, max=60315, avg=17573.58, stdev=11937.64 00:10:19.811 clat percentiles (usec): 00:10:19.811 | 1.00th=[ 5932], 5.00th=[ 7635], 10.00th=[ 8029], 20.00th=[ 8979], 00:10:19.811 | 30.00th=[ 9765], 40.00th=[10945], 50.00th=[12387], 60.00th=[14222], 00:10:19.811 | 70.00th=[19268], 80.00th=[23987], 90.00th=[36439], 95.00th=[44827], 00:10:19.811 | 99.00th=[56361], 99.50th=[57934], 99.90th=[60031], 99.95th=[60556], 00:10:19.811 | 99.99th=[60556] 00:10:19.811 bw ( KiB/s): min= 7456, max=19392, per=27.13%, avg=13424.00, stdev=8440.03, samples=2 00:10:19.811 iops : min= 1864, max= 4848, avg=3356.00, stdev=2110.01, samples=2 00:10:19.811 lat (msec) : 10=37.83%, 20=27.41%, 50=29.07%, 100=5.69% 00:10:19.811 cpu : usr=1.17%, sys=2.35%, ctx=488, majf=0, minf=1 00:10:19.811 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:19.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.811 issued rwts: total=3072,3484,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.811 job1: (groupid=0, jobs=1): err= 0: pid=2866943: Fri Jul 26 13:51:47 2024 00:10:19.811 read: IOPS=3996, BW=15.6MiB/s (16.4MB/s)(16.0MiB/1025msec) 00:10:19.811 slat (nsec): min=1409, max=8846.5k, avg=94564.22, stdev=537197.46 00:10:19.811 clat (usec): min=4712, max=27565, avg=11477.05, stdev=4039.03 00:10:19.811 lat (usec): min=4715, max=27569, avg=11571.61, stdev=4078.53 00:10:19.811 clat percentiles (usec): 00:10:19.811 | 1.00th=[ 5997], 5.00th=[ 6849], 10.00th=[ 7308], 20.00th=[ 8291], 00:10:19.811 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10552], 60.00th=[11469], 00:10:19.811 | 70.00th=[12518], 80.00th=[14353], 90.00th=[17171], 95.00th=[19792], 00:10:19.811 | 99.00th=[25297], 99.50th=[25822], 99.90th=[26870], 99.95th=[27657], 00:10:19.811 | 99.99th=[27657] 00:10:19.811 write: IOPS=4238, BW=16.6MiB/s (17.4MB/s)(17.0MiB/1025msec); 0 zone resets 00:10:19.811 slat (usec): min=2, max=21741, avg=138.04, stdev=625.50 00:10:19.811 clat (usec): min=1581, max=43657, avg=19096.81, stdev=7350.67 00:10:19.811 lat (usec): min=1592, max=43660, avg=19234.85, stdev=7379.19 00:10:19.811 clat percentiles (usec): 00:10:19.811 | 1.00th=[ 4555], 5.00th=[ 6390], 10.00th=[ 8094], 20.00th=[12256], 00:10:19.811 | 
30.00th=[16581], 40.00th=[19006], 50.00th=[20579], 60.00th=[21627], 00:10:19.811 | 70.00th=[22676], 80.00th=[23725], 90.00th=[24773], 95.00th=[30802], 00:10:19.811 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:10:19.811 | 99.99th=[43779] 00:10:19.811 bw ( KiB/s): min=16784, max=16944, per=34.08%, avg=16864.00, stdev=113.14, samples=2 00:10:19.811 iops : min= 4196, max= 4236, avg=4216.00, stdev=28.28, samples=2 00:10:19.811 lat (msec) : 2=0.02%, 4=0.28%, 10=28.85%, 20=40.19%, 50=30.65% 00:10:19.811 cpu : usr=2.25%, sys=2.44%, ctx=822, majf=0, minf=1 00:10:19.811 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:19.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.811 issued rwts: total=4096,4344,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.811 job2: (groupid=0, jobs=1): err= 0: pid=2866945: Fri Jul 26 13:51:47 2024 00:10:19.811 read: IOPS=2928, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1002msec) 00:10:19.811 slat (nsec): min=1058, max=23618k, avg=154874.18, stdev=1190023.27 00:10:19.811 clat (usec): min=1392, max=75499, avg=18583.40, stdev=16758.86 00:10:19.811 lat (usec): min=3915, max=75506, avg=18738.28, stdev=16858.31 00:10:19.811 clat percentiles (usec): 00:10:19.811 | 1.00th=[ 4146], 5.00th=[ 5735], 10.00th=[ 7242], 20.00th=[ 7963], 00:10:19.811 | 30.00th=[ 8717], 40.00th=[10028], 50.00th=[11600], 60.00th=[12125], 00:10:19.811 | 70.00th=[18744], 80.00th=[28181], 90.00th=[41681], 95.00th=[61604], 00:10:19.811 | 99.00th=[74974], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:10:19.811 | 99.99th=[76022] 00:10:19.811 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:10:19.811 slat (nsec): min=1805, max=17159k, avg=173785.10, stdev=937077.22 00:10:19.811 clat (usec): min=3957, max=79283, avg=23257.28, stdev=15067.05 00:10:19.811 lat (usec): min=3959, max=79290, avg=23431.06, stdev=15134.81 00:10:19.811 clat percentiles (usec): 00:10:19.811 | 1.00th=[ 5276], 5.00th=[ 8356], 10.00th=[ 9503], 20.00th=[10421], 00:10:19.811 | 30.00th=[11994], 40.00th=[13829], 50.00th=[18482], 60.00th=[23725], 00:10:19.811 | 70.00th=[29492], 80.00th=[34866], 90.00th=[44827], 95.00th=[54264], 00:10:19.812 | 99.00th=[70779], 99.50th=[74974], 99.90th=[79168], 99.95th=[79168], 00:10:19.812 | 99.99th=[79168] 00:10:19.812 bw ( KiB/s): min= 8192, max=16384, per=24.83%, avg=12288.00, stdev=5792.62, samples=2 00:10:19.812 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:10:19.812 lat (msec) : 2=0.02%, 4=0.18%, 10=27.67%, 20=33.87%, 50=30.39% 00:10:19.812 lat (msec) : 100=7.88% 00:10:19.812 cpu : usr=1.50%, sys=1.80%, ctx=472, majf=0, minf=1 00:10:19.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:19.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.812 issued rwts: total=2934,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.812 job3: (groupid=0, jobs=1): err= 0: pid=2866946: Fri Jul 26 13:51:47 2024 00:10:19.812 read: IOPS=1495, BW=5982KiB/s (6126kB/s)(6144KiB/1027msec) 00:10:19.812 slat (nsec): min=1660, max=13649k, avg=304260.26, stdev=1548328.93 00:10:19.812 clat (usec): min=15404, max=77501, 
avg=39509.55, stdev=14218.34 00:10:19.812 lat (usec): min=15407, max=77507, avg=39813.81, stdev=14313.45 00:10:19.812 clat percentiles (usec): 00:10:19.812 | 1.00th=[16581], 5.00th=[18482], 10.00th=[21627], 20.00th=[23462], 00:10:19.812 | 30.00th=[29230], 40.00th=[34866], 50.00th=[39584], 60.00th=[43779], 00:10:19.812 | 70.00th=[49546], 80.00th=[52691], 90.00th=[56886], 95.00th=[63177], 00:10:19.812 | 99.00th=[73925], 99.50th=[77071], 99.90th=[77071], 99.95th=[77071], 00:10:19.812 | 99.99th=[77071] 00:10:19.812 write: IOPS=1756, BW=7026KiB/s (7195kB/s)(7216KiB/1027msec); 0 zone resets 00:10:19.812 slat (usec): min=2, max=12614, avg=293.02, stdev=1259.52 00:10:19.812 clat (usec): min=15544, max=82740, avg=37640.55, stdev=15201.20 00:10:19.812 lat (usec): min=15550, max=83709, avg=37933.57, stdev=15251.86 00:10:19.812 clat percentiles (usec): 00:10:19.812 | 1.00th=[16581], 5.00th=[17957], 10.00th=[19792], 20.00th=[23725], 00:10:19.812 | 30.00th=[27132], 40.00th=[30278], 50.00th=[33817], 60.00th=[40109], 00:10:19.812 | 70.00th=[45876], 80.00th=[50070], 90.00th=[60556], 95.00th=[66847], 00:10:19.812 | 99.00th=[76022], 99.50th=[76022], 99.90th=[82314], 99.95th=[82314], 00:10:19.812 | 99.99th=[82314] 00:10:19.812 bw ( KiB/s): min= 5216, max= 8192, per=13.55%, avg=6704.00, stdev=2104.35, samples=2 00:10:19.812 iops : min= 1304, max= 2048, avg=1676.00, stdev=526.09, samples=2 00:10:19.812 lat (msec) : 20=9.52%, 50=66.08%, 100=24.40% 00:10:19.812 cpu : usr=1.27%, sys=2.05%, ctx=303, majf=0, minf=1 00:10:19.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:10:19.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.812 issued rwts: total=1536,1804,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.812 00:10:19.812 Run status group 0 (all jobs): 00:10:19.812 READ: bw=44.3MiB/s (46.4MB/s), 5982KiB/s-15.6MiB/s (6126kB/s-16.4MB/s), io=45.5MiB (47.7MB), run=1002-1027msec 00:10:19.812 WRITE: bw=48.3MiB/s (50.7MB/s), 7026KiB/s-16.6MiB/s (7195kB/s-17.4MB/s), io=49.6MiB (52.0MB), run=1002-1027msec 00:10:19.812 00:10:19.812 Disk stats (read/write): 00:10:19.812 nvme0n1: ios=2869/3072, merge=0/0, ticks=19714/17964, in_queue=37678, util=96.39% 00:10:19.812 nvme0n2: ios=3121/3547, merge=0/0, ticks=34623/63698, in_queue=98321, util=86.54% 00:10:19.812 nvme0n3: ios=1707/2048, merge=0/0, ticks=15239/21312, in_queue=36551, util=95.23% 00:10:19.812 nvme0n4: ios=1185/1536, merge=0/0, ticks=14767/18504, in_queue=33271, util=92.50% 00:10:19.812 13:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:19.812 13:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2867179 00:10:19.812 13:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:19.812 13:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:19.812 [global] 00:10:19.812 thread=1 00:10:19.812 invalidate=1 00:10:19.812 rw=read 00:10:19.812 time_based=1 00:10:19.812 runtime=10 00:10:19.812 ioengine=libaio 00:10:19.812 direct=1 00:10:19.812 bs=4096 00:10:19.812 iodepth=1 00:10:19.812 norandommap=1 00:10:19.812 numjobs=1 00:10:19.812 00:10:19.812 [job0] 00:10:19.812 filename=/dev/nvme0n1 00:10:19.812 [job1] 00:10:19.812 
filename=/dev/nvme0n2 00:10:19.812 [job2] 00:10:19.812 filename=/dev/nvme0n3 00:10:19.812 [job3] 00:10:19.812 filename=/dev/nvme0n4 00:10:19.812 Could not set queue depth (nvme0n1) 00:10:19.812 Could not set queue depth (nvme0n2) 00:10:19.812 Could not set queue depth (nvme0n3) 00:10:19.812 Could not set queue depth (nvme0n4) 00:10:20.071 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.071 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.071 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.071 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.071 fio-3.35 00:10:20.071 Starting 4 threads 00:10:23.361 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:23.361 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=270336, buflen=4096 00:10:23.361 fio: pid=2867321, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:23.361 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:23.361 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=2375680, buflen=4096 00:10:23.361 fio: pid=2867320, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:23.361 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:23.361 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:23.361 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=19128320, buflen=4096 00:10:23.361 fio: pid=2867318, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:23.361 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:23.361 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:23.621 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=5496832, buflen=4096 00:10:23.621 fio: pid=2867319, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:23.621 00:10:23.621 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2867318: Fri Jul 26 13:51:50 2024 00:10:23.621 read: IOPS=1509, BW=6036KiB/s (6180kB/s)(18.2MiB/3095msec) 00:10:23.621 slat (usec): min=5, max=14025, avg=16.47, stdev=306.77 00:10:23.621 clat (usec): min=415, max=42850, avg=644.00, stdev=1675.18 00:10:23.621 lat (usec): min=422, max=42872, avg=657.82, stdev=1694.20 00:10:23.621 clat percentiles (usec): 00:10:23.621 | 1.00th=[ 433], 5.00th=[ 478], 10.00th=[ 490], 20.00th=[ 502], 00:10:23.621 | 30.00th=[ 506], 40.00th=[ 515], 50.00th=[ 519], 60.00th=[ 529], 00:10:23.621 | 70.00th=[ 537], 80.00th=[ 586], 90.00th=[ 783], 95.00th=[ 955], 00:10:23.621 | 99.00th=[ 1139], 99.50th=[ 1287], 99.90th=[42206], 99.95th=[42206], 00:10:23.621 | 99.99th=[42730] 00:10:23.621 bw ( KiB/s): min= 6472, 
max= 6976, per=82.88%, avg=6723.20, stdev=188.77, samples=5 00:10:23.621 iops : min= 1618, max= 1744, avg=1680.80, stdev=47.19, samples=5 00:10:23.621 lat (usec) : 500=20.34%, 750=68.49%, 1000=7.00% 00:10:23.621 lat (msec) : 2=3.92%, 4=0.06%, 50=0.17% 00:10:23.621 cpu : usr=0.87%, sys=2.55%, ctx=4676, majf=0, minf=1 00:10:23.621 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.621 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.621 issued rwts: total=4671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.621 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.621 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2867319: Fri Jul 26 13:51:50 2024 00:10:23.621 read: IOPS=409, BW=1635KiB/s (1674kB/s)(5368KiB/3283msec) 00:10:23.621 slat (usec): min=5, max=14754, avg=62.62, stdev=822.68 00:10:23.621 clat (usec): min=366, max=43036, avg=2380.82, stdev=8417.78 00:10:23.621 lat (usec): min=373, max=43058, avg=2443.48, stdev=8450.90 00:10:23.621 clat percentiles (usec): 00:10:23.621 | 1.00th=[ 396], 5.00th=[ 469], 10.00th=[ 486], 20.00th=[ 498], 00:10:23.622 | 30.00th=[ 506], 40.00th=[ 519], 50.00th=[ 545], 60.00th=[ 562], 00:10:23.622 | 70.00th=[ 627], 80.00th=[ 717], 90.00th=[ 824], 95.00th=[ 1303], 00:10:23.622 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:10:23.622 | 99.99th=[43254] 00:10:23.622 bw ( KiB/s): min= 96, max= 6726, per=16.00%, avg=1298.33, stdev=2668.24, samples=6 00:10:23.622 iops : min= 24, max= 1681, avg=324.50, stdev=666.86, samples=6 00:10:23.622 lat (usec) : 500=23.31%, 750=59.20%, 1000=11.24% 00:10:23.622 lat (msec) : 2=1.64%, 4=0.22%, 50=4.32% 00:10:23.622 cpu : usr=0.15%, sys=0.40%, ctx=1349, majf=0, minf=1 00:10:23.622 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.622 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.622 issued rwts: total=1343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.622 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.622 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2867320: Fri Jul 26 13:51:50 2024 00:10:23.622 read: IOPS=201, BW=806KiB/s (825kB/s)(2320KiB/2878msec) 00:10:23.622 slat (usec): min=6, max=21775, avg=65.25, stdev=1003.75 00:10:23.622 clat (usec): min=414, max=43019, avg=4893.69, stdev=12412.22 00:10:23.622 lat (usec): min=422, max=43041, avg=4959.04, stdev=12441.17 00:10:23.622 clat percentiles (usec): 00:10:23.622 | 1.00th=[ 416], 5.00th=[ 424], 10.00th=[ 433], 20.00th=[ 441], 00:10:23.622 | 30.00th=[ 537], 40.00th=[ 644], 50.00th=[ 734], 60.00th=[ 816], 00:10:23.622 | 70.00th=[ 971], 80.00th=[ 1237], 90.00th=[12780], 95.00th=[42206], 00:10:23.622 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:10:23.622 | 99.99th=[43254] 00:10:23.622 bw ( KiB/s): min= 96, max= 352, per=2.21%, avg=179.20, stdev=113.79, samples=5 00:10:23.622 iops : min= 24, max= 88, avg=44.80, stdev=28.45, samples=5 00:10:23.622 lat (usec) : 500=27.02%, 750=24.78%, 1000=20.83% 00:10:23.622 lat (msec) : 2=17.04%, 20=0.17%, 50=9.98% 00:10:23.622 cpu : usr=0.07%, sys=0.24%, ctx=583, majf=0, minf=1 00:10:23.622 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:10:23.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.622 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.622 issued rwts: total=581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.622 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.622 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2867321: Fri Jul 26 13:51:50 2024 00:10:23.622 read: IOPS=24, BW=98.1KiB/s (100kB/s)(264KiB/2692msec) 00:10:23.622 slat (nsec): min=9344, max=29177, avg=17813.48, stdev=5378.80 00:10:23.622 clat (usec): min=1272, max=42211, avg=40743.28, stdev=7023.08 00:10:23.622 lat (usec): min=1293, max=42224, avg=40760.99, stdev=7021.76 00:10:23.622 clat percentiles (usec): 00:10:23.622 | 1.00th=[ 1270], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:10:23.622 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:23.622 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:23.622 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:23.622 | 99.99th=[42206] 00:10:23.622 bw ( KiB/s): min= 96, max= 104, per=1.20%, avg=97.60, stdev= 3.58, samples=5 00:10:23.622 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:10:23.622 lat (msec) : 2=2.99%, 50=95.52% 00:10:23.622 cpu : usr=0.11%, sys=0.00%, ctx=69, majf=0, minf=2 00:10:23.622 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.622 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.622 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.622 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.622 00:10:23.622 Run status group 0 (all jobs): 00:10:23.622 READ: bw=8112KiB/s (8307kB/s), 98.1KiB/s-6036KiB/s (100kB/s-6180kB/s), io=26.0MiB (27.3MB), run=2692-3283msec 00:10:23.622 00:10:23.622 Disk stats (read/write): 00:10:23.622 nvme0n1: ios=4664/0, merge=0/0, ticks=2699/0, in_queue=2699, util=94.69% 00:10:23.622 nvme0n2: ios=1147/0, merge=0/0, ticks=3055/0, in_queue=3055, util=95.11% 00:10:23.622 nvme0n3: ios=552/0, merge=0/0, ticks=2816/0, in_queue=2816, util=95.51% 00:10:23.622 nvme0n4: ios=111/0, merge=0/0, ticks=3662/0, in_queue=3662, util=99.93% 00:10:23.622 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:23.622 13:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:23.882 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:23.882 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:23.882 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:23.882 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:24.143 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:10:24.143 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:24.403 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:24.403 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:24.403 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:24.403 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2867179 00:10:24.403 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:24.403 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:24.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.662 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:24.662 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:24.662 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.662 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:24.662 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:24.662 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.662 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:24.662 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:24.662 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:24.662 nvmf hotplug test: fio failed as expected 00:10:24.662 13:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:24.923 rmmod nvme_tcp 00:10:24.923 rmmod nvme_fabrics 00:10:24.923 rmmod nvme_keyring 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2864247 ']' 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2864247 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2864247 ']' 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2864247 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2864247 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2864247' 00:10:24.923 killing process with pid 2864247 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2864247 00:10:24.923 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2864247 00:10:25.183 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:25.183 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:25.183 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:25.183 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:25.183 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:25.183 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.183 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:25.183 13:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.093 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:27.093 00:10:27.093 real 0m26.565s 00:10:27.093 user 1m46.826s 00:10:27.093 sys 0m7.336s 00:10:27.093 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:27.093 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.093 ************************************ 00:10:27.093 END TEST nvmf_fio_target 00:10:27.093 
************************************ 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:27.353 ************************************ 00:10:27.353 START TEST nvmf_bdevio 00:10:27.353 ************************************ 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:27.353 * Looking for test storage... 00:10:27.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.353 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.354 13:51:54 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:10:27.354 13:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:10:32.637 13:51:59 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:32.637 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:32.637 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:32.638 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:32.638 13:51:59 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:32.638 Found net devices under 0000:86:00.0: cvl_0_0 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:32.638 Found net devices under 0000:86:00.1: cvl_0_1 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.638 13:51:59 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:32.638 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:32.638 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:32.638 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:32.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:10:32.899 00:10:32.899 --- 10.0.0.2 ping statistics --- 00:10:32.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.899 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:32.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:32.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.410 ms 00:10:32.899 00:10:32.899 --- 10.0.0.1 ping statistics --- 00:10:32.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.899 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2871559 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2871559 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2871559 ']' 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:32.899 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.899 [2024-07-26 13:52:00.203508] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:10:32.899 [2024-07-26 13:52:00.203548] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.899 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.899 [2024-07-26 13:52:00.263856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.159 [2024-07-26 13:52:00.349935] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.159 [2024-07-26 13:52:00.349970] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:33.159 [2024-07-26 13:52:00.349978] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:33.159 [2024-07-26 13:52:00.349986] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:33.159 [2024-07-26 13:52:00.349991] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:33.159 [2024-07-26 13:52:00.350102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:33.159 [2024-07-26 13:52:00.350132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:33.159 [2024-07-26 13:52:00.350242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.159 [2024-07-26 13:52:00.350243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.730 [2024-07-26 13:52:01.062460] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.730 Malloc0 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.730 [2024-07-26 13:52:01.114081] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:33.730 { 00:10:33.730 "params": { 00:10:33.730 "name": "Nvme$subsystem", 00:10:33.730 "trtype": "$TEST_TRANSPORT", 00:10:33.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:33.730 "adrfam": "ipv4", 00:10:33.730 "trsvcid": "$NVMF_PORT", 00:10:33.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:33.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:33.730 "hdgst": ${hdgst:-false}, 00:10:33.730 "ddgst": ${ddgst:-false} 00:10:33.730 }, 00:10:33.730 "method": "bdev_nvme_attach_controller" 00:10:33.730 } 00:10:33.730 EOF 00:10:33.730 )") 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:33.730 13:52:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:33.730 "params": { 00:10:33.730 "name": "Nvme1", 00:10:33.730 "trtype": "tcp", 00:10:33.730 "traddr": "10.0.0.2", 00:10:33.730 "adrfam": "ipv4", 00:10:33.730 "trsvcid": "4420", 00:10:33.730 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:33.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:33.730 "hdgst": false, 00:10:33.730 "ddgst": false 00:10:33.730 }, 00:10:33.731 "method": "bdev_nvme_attach_controller" 00:10:33.731 }' 00:10:33.731 [2024-07-26 13:52:01.163111] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:10:33.731 [2024-07-26 13:52:01.163151] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2871805 ] 00:10:33.991 EAL: No free 2048 kB hugepages reported on node 1 00:10:33.991 [2024-07-26 13:52:01.217683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:33.991 [2024-07-26 13:52:01.294059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.991 [2024-07-26 13:52:01.294119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.991 [2024-07-26 13:52:01.294116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:34.250 I/O targets: 00:10:34.250 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:34.250 00:10:34.250 00:10:34.250 CUnit - A unit testing framework for C - Version 2.1-3 00:10:34.250 http://cunit.sourceforge.net/ 00:10:34.250 00:10:34.250 00:10:34.250 Suite: bdevio tests on: Nvme1n1 00:10:34.250 Test: blockdev write read block ...passed 00:10:34.250 Test: blockdev write zeroes read block ...passed 00:10:34.250 Test: blockdev write zeroes read no split ...passed 00:10:34.509 Test: blockdev write zeroes read split ...passed 00:10:34.509 Test: blockdev write zeroes read split partial ...passed 00:10:34.509 Test: blockdev reset ...[2024-07-26 13:52:01.776907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:34.509 [2024-07-26 13:52:01.776970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7036d0 (9): Bad file descriptor 00:10:34.509 [2024-07-26 13:52:01.832880] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:34.509 passed 00:10:34.509 Test: blockdev write read 8 blocks ...passed 00:10:34.509 Test: blockdev write read size > 128k ...passed 00:10:34.509 Test: blockdev write read invalid size ...passed 00:10:34.509 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:34.509 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:34.509 Test: blockdev write read max offset ...passed 00:10:34.770 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:34.770 Test: blockdev writev readv 8 blocks ...passed 00:10:34.770 Test: blockdev writev readv 30 x 1block ...passed 00:10:34.770 Test: blockdev writev readv block ...passed 00:10:34.770 Test: blockdev writev readv size > 128k ...passed 00:10:34.770 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:34.770 Test: blockdev comparev and writev ...[2024-07-26 13:52:02.073855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.770 [2024-07-26 13:52:02.073884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:34.770 [2024-07-26 13:52:02.073899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.770 [2024-07-26 13:52:02.073910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:34.770 [2024-07-26 13:52:02.074473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.770 [2024-07-26 13:52:02.074485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:34.770 [2024-07-26 13:52:02.074497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.770 [2024-07-26 13:52:02.074505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:34.770 [2024-07-26 13:52:02.074977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.770 [2024-07-26 13:52:02.074990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:34.770 [2024-07-26 13:52:02.075001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.770 [2024-07-26 13:52:02.075009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:34.770 [2024-07-26 13:52:02.075476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.770 [2024-07-26 13:52:02.075488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:34.770 [2024-07-26 13:52:02.075499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.770 [2024-07-26 13:52:02.075507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:34.770 passed 00:10:34.770 Test: blockdev nvme passthru rw ...passed 00:10:34.770 Test: blockdev nvme passthru vendor specific ...[2024-07-26 13:52:02.160120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:34.770 [2024-07-26 13:52:02.160135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:34.770 [2024-07-26 13:52:02.160629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:34.770 [2024-07-26 13:52:02.160640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:34.770 [2024-07-26 13:52:02.161049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:34.770 [2024-07-26 13:52:02.161060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:34.770 [2024-07-26 13:52:02.161470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:34.770 [2024-07-26 13:52:02.161482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:34.770 passed 00:10:34.770 Test: blockdev nvme admin passthru ...passed 00:10:35.030 Test: blockdev copy ...passed 00:10:35.030 00:10:35.030 Run Summary: Type Total Ran Passed Failed Inactive 00:10:35.030 suites 1 1 n/a 0 0 00:10:35.030 tests 23 23 23 0 0 00:10:35.030 asserts 152 152 152 0 n/a 00:10:35.030 00:10:35.030 Elapsed time = 1.370 seconds 00:10:35.030 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:35.031 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.031 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:35.031 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.031 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:35.031 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:35.031 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:35.031 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:35.031 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:35.031 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:35.031 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:35.031 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:35.031 rmmod nvme_tcp 00:10:35.031 rmmod nvme_fabrics 00:10:35.031 rmmod nvme_keyring 00:10:35.031 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:35.031 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:35.031 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
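The bdevio pass above (23/23 tests) is driven entirely by the bdev_nvme_attach_controller entry that gen_nvmf_target_json prints earlier in the trace and hands to bdevio as --json /dev/fd/62. A minimal sketch of replaying that step by hand while the target from this run is still listening on 10.0.0.2:4420; the surrounding "subsystems"/"bdev" envelope is the standard SPDK JSON-config wrapper and is an assumption here (the trace only prints the inner attach entry):

# Save the attach entry shown in the trace as a standalone SPDK JSON config
cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Point the same bdevio binary at it (run as root; the initiator side stays in the default netns)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json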
00:10:35.031 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2871559 ']' 00:10:35.031 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2871559 00:10:35.031 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2871559 ']' 00:10:35.031 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2871559 00:10:35.031 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:35.031 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:35.031 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2871559 00:10:35.291 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:35.291 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:35.291 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2871559' 00:10:35.291 killing process with pid 2871559 00:10:35.291 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2871559 00:10:35.291 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2871559 00:10:35.291 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:35.291 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:35.291 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:35.291 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:35.291 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:35.291 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.291 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.291 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.835 13:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:37.835 00:10:37.835 real 0m10.177s 00:10:37.835 user 0m13.422s 00:10:37.835 sys 0m4.555s 00:10:37.835 13:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.835 13:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.835 ************************************ 00:10:37.835 END TEST nvmf_bdevio 00:10:37.835 ************************************ 00:10:37.835 13:52:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:37.836 00:10:37.836 real 4m33.039s 00:10:37.836 user 10m32.476s 00:10:37.836 sys 1m29.334s 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:37.836 ************************************ 00:10:37.836 END TEST nvmf_target_core 00:10:37.836 ************************************ 00:10:37.836 13:52:04 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:37.836 13:52:04 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:37.836 13:52:04 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.836 13:52:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:37.836 ************************************ 00:10:37.836 START TEST nvmf_target_extra 00:10:37.836 ************************************ 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:37.836 * Looking for test storage... 00:10:37.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
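run_test launches each of these target suites as a plain shell script with a --transport argument, so a single case such as nvmf_example can be rerun outside the harness when debugging. A minimal sketch, assuming the same workspace layout as above, root privileges, and the two ice/e810 ports found in the NIC scan below (the script creates the cvl_0_0_ns_spdk namespace itself):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
cd "$SPDK_DIR"
# Same script and argument as the run_test line above
sudo ./test/nvmf/target/nvmf_example.sh --transport=tcp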
00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:37.836 ************************************ 00:10:37.836 START TEST nvmf_example 00:10:37.836 ************************************ 00:10:37.836 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:37.836 * Looking for test storage... 00:10:37.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.836 13:52:05 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.836 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:10:37.837 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:43.194 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:43.194 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:43.194 Found net devices under 0000:86:00.0: cvl_0_0 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.194 13:52:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:43.194 Found net devices under 0000:86:00.1: cvl_0_1 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:43.194 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:43.194 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:43.194 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:43.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:43.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:10:43.195 00:10:43.195 --- 10.0.0.2 ping statistics --- 00:10:43.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.195 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:43.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:43.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.419 ms 00:10:43.195 00:10:43.195 --- 10.0.0.1 ping statistics --- 00:10:43.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.195 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2875600 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2875600 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2875600 ']' 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:43.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:43.195 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.195 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
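The rpc_cmd calls traced above build the whole example target: a TCP transport with 8192-byte IO units, a 64 MiB / 512 B Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and a listener on 10.0.0.2:4420. A minimal sketch of the same sequence issued directly with scripts/rpc.py against the default /var/tmp/spdk.sock (the rpc.py wrapper is an assumption; method names and arguments are copied verbatim from the trace):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                                    # transport flags exactly as passed above
$RPC bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # allow any host, set serial
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # expose Malloc0 under cnode1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420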
00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:43.766 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:43.766 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.987 Initializing NVMe Controllers 00:10:55.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:55.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:55.988 Initialization complete. Launching workers. 00:10:55.988 ======================================================== 00:10:55.988 Latency(us) 00:10:55.988 Device Information : IOPS MiB/s Average min max 00:10:55.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13669.42 53.40 4681.85 717.20 16041.64 00:10:55.988 ======================================================== 00:10:55.988 Total : 13669.42 53.40 4681.85 717.20 16041.64 00:10:55.988 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:55.988 rmmod nvme_tcp 00:10:55.988 rmmod nvme_fabrics 00:10:55.988 rmmod nvme_keyring 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2875600 ']' 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2875600 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2875600 ']' 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2875600 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:55.988 13:52:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2875600 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2875600' 00:10:55.988 killing process with pid 2875600 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2875600 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2875600 00:10:55.988 nvmf threads initialize successfully 00:10:55.988 bdev subsystem init successfully 00:10:55.988 created a nvmf target service 00:10:55.988 create targets's poll groups done 00:10:55.988 all subsystems of target started 00:10:55.988 nvmf target is running 00:10:55.988 all subsystems of target stopped 00:10:55.988 destroy targets's poll groups done 00:10:55.988 destroyed the nvmf target service 00:10:55.988 bdev subsystem finish successfully 00:10:55.988 nvmf threads destroy successfully 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.988 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.561 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:56.561 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:56.561 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:56.561 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:56.561 00:10:56.561 real 0m18.744s 00:10:56.561 user 0m45.709s 00:10:56.561 sys 0m5.082s 00:10:56.561 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.561 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:56.561 ************************************ 00:10:56.561 END TEST nvmf_example 00:10:56.561 ************************************ 00:10:56.561 13:52:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:56.561 13:52:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:56.561 13:52:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:56.561 13:52:23 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:56.561 ************************************ 00:10:56.561 START TEST nvmf_filesystem 00:10:56.561 ************************************ 00:10:56.561 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:56.561 * Looking for test storage... 00:10:56.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.561 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:56.561 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:56.561 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:56.562 13:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:56.562 13:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:10:56.562 13:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:56.562 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:56.563 13:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:56.563 #define SPDK_CONFIG_H 00:10:56.563 #define SPDK_CONFIG_APPS 1 00:10:56.563 #define SPDK_CONFIG_ARCH native 00:10:56.563 #undef SPDK_CONFIG_ASAN 00:10:56.563 #undef SPDK_CONFIG_AVAHI 00:10:56.563 #undef SPDK_CONFIG_CET 00:10:56.563 #define SPDK_CONFIG_COVERAGE 1 00:10:56.563 #define SPDK_CONFIG_CROSS_PREFIX 00:10:56.563 #undef SPDK_CONFIG_CRYPTO 00:10:56.563 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:56.563 #undef SPDK_CONFIG_CUSTOMOCF 00:10:56.563 #undef SPDK_CONFIG_DAOS 00:10:56.563 #define SPDK_CONFIG_DAOS_DIR 00:10:56.563 #define SPDK_CONFIG_DEBUG 1 00:10:56.563 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:56.563 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:56.563 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:56.563 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:56.563 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:56.563 #undef SPDK_CONFIG_DPDK_UADK 00:10:56.563 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:56.563 #define SPDK_CONFIG_EXAMPLES 1 00:10:56.563 #undef SPDK_CONFIG_FC 00:10:56.563 #define SPDK_CONFIG_FC_PATH 00:10:56.563 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:56.563 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:56.563 #undef SPDK_CONFIG_FUSE 00:10:56.563 #undef SPDK_CONFIG_FUZZER 00:10:56.563 #define SPDK_CONFIG_FUZZER_LIB 00:10:56.563 #undef SPDK_CONFIG_GOLANG 00:10:56.563 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:56.563 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:56.563 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:56.563 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:56.563 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:56.563 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:56.563 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:56.563 #define SPDK_CONFIG_IDXD 1 00:10:56.563 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:56.563 #undef SPDK_CONFIG_IPSEC_MB 00:10:56.563 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:56.563 #define SPDK_CONFIG_ISAL 1 00:10:56.563 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:56.563 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:56.563 #define SPDK_CONFIG_LIBDIR 00:10:56.563 #undef SPDK_CONFIG_LTO 00:10:56.563 #define SPDK_CONFIG_MAX_LCORES 128 00:10:56.563 #define SPDK_CONFIG_NVME_CUSE 1 00:10:56.563 #undef SPDK_CONFIG_OCF 00:10:56.563 #define SPDK_CONFIG_OCF_PATH 00:10:56.563 #define SPDK_CONFIG_OPENSSL_PATH 00:10:56.563 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:56.563 #define SPDK_CONFIG_PGO_DIR 00:10:56.563 #undef SPDK_CONFIG_PGO_USE 00:10:56.563 #define SPDK_CONFIG_PREFIX /usr/local 00:10:56.563 #undef SPDK_CONFIG_RAID5F 00:10:56.563 #undef SPDK_CONFIG_RBD 00:10:56.563 #define SPDK_CONFIG_RDMA 1 00:10:56.563 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:56.563 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:56.563 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:56.563 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:56.563 #define SPDK_CONFIG_SHARED 1 00:10:56.563 #undef SPDK_CONFIG_SMA 00:10:56.563 #define SPDK_CONFIG_TESTS 1 00:10:56.563 #undef SPDK_CONFIG_TSAN 00:10:56.563 #define SPDK_CONFIG_UBLK 1 00:10:56.563 #define SPDK_CONFIG_UBSAN 1 00:10:56.563 #undef SPDK_CONFIG_UNIT_TESTS 00:10:56.563 #undef SPDK_CONFIG_URING 00:10:56.563 #define SPDK_CONFIG_URING_PATH 00:10:56.563 #undef SPDK_CONFIG_URING_ZNS 00:10:56.563 #undef SPDK_CONFIG_USDT 00:10:56.563 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:56.563 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:56.563 #define SPDK_CONFIG_VFIO_USER 1 00:10:56.563 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:10:56.563 #define SPDK_CONFIG_VHOST 1 00:10:56.563 #define SPDK_CONFIG_VIRTIO 1 00:10:56.563 #undef SPDK_CONFIG_VTUNE 00:10:56.563 #define SPDK_CONFIG_VTUNE_DIR 00:10:56.563 #define SPDK_CONFIG_WERROR 1 00:10:56.563 #define SPDK_CONFIG_WPDK_DIR 00:10:56.563 #undef SPDK_CONFIG_XNVME 00:10:56.563 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:56.563 13:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:56.563 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:56.564 13:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:56.564 13:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:56.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j96 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 2877916 ]] 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 2877916 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:10:56.565 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.GcFRXQ 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.GcFRXQ/tests/target /tmp/spdk.GcFRXQ 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@329 -- # df -T 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=950202368 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4334227456 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=185145303040 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=195974283264 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=10828980224 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=97924960256 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=97987141632 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=62181376 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
fss["$mount"]=tmpfs 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=39171829760 00:10:56.566 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=39194857472 00:10:56.567 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=23027712 00:10:56.567 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:56.567 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:56.567 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:56.567 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=97984065536 00:10:56.567 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=97987141632 00:10:56.567 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=3076096 00:10:56.567 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:56.567 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:56.567 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:56.567 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=19597422592 00:10:56.567 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=19597426688 00:10:56.567 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:10:56.567 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:56.567 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:10:56.567 * Looking for test storage... 
00:10:56.567 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:10:56.567 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:10:56.567 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.567 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:56.828 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:10:56.829 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=185145303040 00:10:56.829 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:10:56.829 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:10:56.829 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:10:56.829 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:10:56.829 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:10:56.829 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=13043572736 00:10:56.829 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:56.829 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.829 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.829 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:56.829 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:56.830 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:56.830 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.114 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.114 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:02.114 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:02.114 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:02.114 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:02.114 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:02.114 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:02.114 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:02.114 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:02.114 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:02.114 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:02.114 
13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:02.114 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:02.114 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:02.114 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:02.114 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.114 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.115 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.115 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.115 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.115 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.115 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.115 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.115 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.115 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.115 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.115 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:02.115 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:02.115 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:02.115 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:02.115 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:02.115 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:02.115 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:02.115 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:02.115 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:02.115 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:02.115 Found net devices under 0000:86:00.0: cvl_0_0 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:02.115 Found net devices under 0000:86:00.1: cvl_0_1 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:02.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:02.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:11:02.115 00:11:02.115 --- 10.0.0.2 ping statistics --- 00:11:02.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.115 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:02.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:02.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.444 ms 00:11:02.115 00:11:02.115 --- 10.0.0.1 ping statistics --- 00:11:02.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.115 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.115 ************************************ 00:11:02.115 START TEST nvmf_filesystem_no_in_capsule 00:11:02.115 ************************************ 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:02.115 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:02.116 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.116 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2880848 00:11:02.116 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2880848 00:11:02.116 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2880848 ']' 00:11:02.116 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.116 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:11:02.116 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.116 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:02.116 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:02.116 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.116 [2024-07-26 13:52:29.364985] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:11:02.116 [2024-07-26 13:52:29.365025] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.116 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.116 [2024-07-26 13:52:29.423162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:02.116 [2024-07-26 13:52:29.504048] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.116 [2024-07-26 13:52:29.504084] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.116 [2024-07-26 13:52:29.504091] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.116 [2024-07-26 13:52:29.504097] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.116 [2024-07-26 13:52:29.504103] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
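[Editor's note] Pulled together from the nvmf_tcp_init trace above and the target launch at nvmf/common.sh@480: the two-port topology the rest of this log runs against. Every command below appears verbatim in the trace; only the comments are added.

    ip netns add cvl_0_0_ns_spdk                                  # target gets its own network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # port 0000:86:00.0 moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # port 0000:86:00.1 stays host-side (initiator)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open TCP/4420 on the host-side port
    ping -c 1 10.0.0.2                                            # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator reachability
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF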
00:11:02.116 [2024-07-26 13:52:29.504142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.116 [2024-07-26 13:52:29.504237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.116 [2024-07-26 13:52:29.504341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.116 [2024-07-26 13:52:29.504342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.055 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:03.055 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:03.055 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:03.055 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:03.055 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.055 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.055 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:03.055 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:03.055 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.055 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.055 [2024-07-26 13:52:30.213346] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.055 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.055 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:03.055 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.055 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.055 Malloc1 00:11:03.055 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.055 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:03.055 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.055 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.055 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.056 13:52:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.056 [2024-07-26 13:52:30.357933] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:03.056 { 00:11:03.056 "name": "Malloc1", 00:11:03.056 "aliases": [ 00:11:03.056 "d48543fc-a886-46d7-962b-a4802931f947" 00:11:03.056 ], 00:11:03.056 "product_name": "Malloc disk", 00:11:03.056 "block_size": 512, 00:11:03.056 "num_blocks": 1048576, 00:11:03.056 "uuid": "d48543fc-a886-46d7-962b-a4802931f947", 00:11:03.056 "assigned_rate_limits": { 00:11:03.056 "rw_ios_per_sec": 0, 00:11:03.056 "rw_mbytes_per_sec": 0, 00:11:03.056 "r_mbytes_per_sec": 0, 00:11:03.056 "w_mbytes_per_sec": 0 00:11:03.056 }, 00:11:03.056 "claimed": true, 00:11:03.056 "claim_type": "exclusive_write", 00:11:03.056 "zoned": false, 00:11:03.056 "supported_io_types": { 00:11:03.056 "read": 
true, 00:11:03.056 "write": true, 00:11:03.056 "unmap": true, 00:11:03.056 "flush": true, 00:11:03.056 "reset": true, 00:11:03.056 "nvme_admin": false, 00:11:03.056 "nvme_io": false, 00:11:03.056 "nvme_io_md": false, 00:11:03.056 "write_zeroes": true, 00:11:03.056 "zcopy": true, 00:11:03.056 "get_zone_info": false, 00:11:03.056 "zone_management": false, 00:11:03.056 "zone_append": false, 00:11:03.056 "compare": false, 00:11:03.056 "compare_and_write": false, 00:11:03.056 "abort": true, 00:11:03.056 "seek_hole": false, 00:11:03.056 "seek_data": false, 00:11:03.056 "copy": true, 00:11:03.056 "nvme_iov_md": false 00:11:03.056 }, 00:11:03.056 "memory_domains": [ 00:11:03.056 { 00:11:03.056 "dma_device_id": "system", 00:11:03.056 "dma_device_type": 1 00:11:03.056 }, 00:11:03.056 { 00:11:03.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.056 "dma_device_type": 2 00:11:03.056 } 00:11:03.056 ], 00:11:03.056 "driver_specific": {} 00:11:03.056 } 00:11:03.056 ]' 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:03.056 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:04.436 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:04.436 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:04.436 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:04.436 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:04.436 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:06.343 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:06.343 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:06.343 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:06.344 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:06.344 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:06.344 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:06.344 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:06.344 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:06.344 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:06.344 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:06.344 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:06.344 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:06.344 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:06.344 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:06.344 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:06.344 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:06.344 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:06.912 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:07.171 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:08.551 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:08.552 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:08.552 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:08.552 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.552 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.552 ************************************ 00:11:08.552 START TEST filesystem_ext4 00:11:08.552 ************************************ 00:11:08.552 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
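[Editor's note] Before the per-filesystem subtests that follow, the no_in_capsule case provisions the target and attaches the host as traced above. The RPCs go through the rpc_cmd wrapper; it is assumed here to forward to SPDK's scripts/rpc.py against /var/tmp/spdk.sock, so the $rpc shorthand below is illustrative. All arguments are taken from the trace.

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"   # assumed rpc_cmd backend
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0      # -c 0: no in-capsule data for this variant
    $rpc bdev_malloc_create 512 512 -b Malloc1             # 512 MiB malloc bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: connect, then carve one GPT partition used by the mkfs/mount subtests.
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    mkdir -p /mnt/device
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe && sleep 1
    # Each subtest then runs mkfs.ext4 -F / mkfs.btrfs -f / mkfs.xfs -f on /dev/nvme0n1p1,
    # mounts it at /mnt/device, touches and removes a file, syncs, and unmounts.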
00:11:08.552 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:08.552 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:08.552 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:08.552 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:08.552 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:08.552 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:08.552 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:08.552 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:08.552 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:08.552 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:08.552 mke2fs 1.46.5 (30-Dec-2021) 00:11:08.552 Discarding device blocks: 0/522240 done 00:11:08.552 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:08.552 Filesystem UUID: baf70ece-e6e4-4280-b822-be649be73d65 00:11:08.552 Superblock backups stored on blocks: 00:11:08.552 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:08.552 00:11:08.552 Allocating group tables: 0/64 done 00:11:08.552 Writing inode tables: 0/64 done 00:11:08.552 Creating journal (8192 blocks): done 00:11:08.552 Writing superblocks and filesystem accounting information: 0/64 done 00:11:08.552 00:11:08.552 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:08.552 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:08.552 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:08.814 
13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2880848 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:08.814 00:11:08.814 real 0m0.480s 00:11:08.814 user 0m0.024s 00:11:08.814 sys 0m0.044s 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:08.814 ************************************ 00:11:08.814 END TEST filesystem_ext4 00:11:08.814 ************************************ 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.814 ************************************ 00:11:08.814 START TEST filesystem_btrfs 00:11:08.814 ************************************ 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:08.814 13:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:08.814 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:09.143 btrfs-progs v6.6.2 00:11:09.143 See https://btrfs.readthedocs.io for more information. 00:11:09.143 00:11:09.143 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:09.143 NOTE: several default settings have changed in version 5.15, please make sure 00:11:09.143 this does not affect your deployments: 00:11:09.143 - DUP for metadata (-m dup) 00:11:09.143 - enabled no-holes (-O no-holes) 00:11:09.143 - enabled free-space-tree (-R free-space-tree) 00:11:09.143 00:11:09.143 Label: (null) 00:11:09.143 UUID: 095ce376-c9bd-4c5b-b892-3dc520d10c15 00:11:09.143 Node size: 16384 00:11:09.143 Sector size: 4096 00:11:09.143 Filesystem size: 510.00MiB 00:11:09.143 Block group profiles: 00:11:09.143 Data: single 8.00MiB 00:11:09.143 Metadata: DUP 32.00MiB 00:11:09.143 System: DUP 8.00MiB 00:11:09.143 SSD detected: yes 00:11:09.143 Zoned device: no 00:11:09.143 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:09.143 Runtime features: free-space-tree 00:11:09.143 Checksum: crc32c 00:11:09.143 Number of devices: 1 00:11:09.143 Devices: 00:11:09.143 ID SIZE PATH 00:11:09.143 1 510.00MiB /dev/nvme0n1p1 00:11:09.143 00:11:09.143 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:09.143 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2880848 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:10.083 00:11:10.083 real 0m1.232s 00:11:10.083 user 0m0.028s 00:11:10.083 sys 0m0.053s 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:10.083 ************************************ 00:11:10.083 END TEST filesystem_btrfs 00:11:10.083 ************************************ 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.083 ************************************ 00:11:10.083 START TEST filesystem_xfs 00:11:10.083 ************************************ 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:10.083 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:10.344 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:10.344 = sectsz=512 attr=2, projid32bit=1 00:11:10.344 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:10.344 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:11:10.344 data = bsize=4096 blocks=130560, imaxpct=25 00:11:10.344 = sunit=0 swidth=0 blks 00:11:10.344 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:10.344 log =internal log bsize=4096 blocks=16384, version=2 00:11:10.344 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:10.344 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:11.284 Discarding blocks...Done. 00:11:11.284 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:11.284 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:13.826 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:13.826 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:13.826 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:13.826 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:13.826 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:13.826 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:13.826 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2880848 00:11:13.827 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:13.827 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:13.827 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:13.827 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:13.827 00:11:13.827 real 0m3.590s 00:11:13.827 user 0m0.021s 00:11:13.827 sys 0m0.053s 00:11:13.827 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:13.827 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:13.827 ************************************ 00:11:13.827 END TEST filesystem_xfs 00:11:13.827 ************************************ 00:11:13.827 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:14.090 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:14.090 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:14.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:14.090 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:14.090 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:14.090 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:14.090 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.090 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:14.090 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.090 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:14.090 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.090 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.090 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.090 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.090 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:14.090 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2880848 00:11:14.090 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2880848 ']' 00:11:14.090 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2880848 00:11:14.090 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:14.090 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:14.090 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2880848 00:11:14.350 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:14.350 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:14.350 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2880848' 00:11:14.350 killing process with pid 2880848 00:11:14.350 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2880848 00:11:14.350 13:52:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 2880848 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:14.611 00:11:14.611 real 0m12.587s 00:11:14.611 user 0m49.422s 00:11:14.611 sys 0m1.105s 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.611 ************************************ 00:11:14.611 END TEST nvmf_filesystem_no_in_capsule 00:11:14.611 ************************************ 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.611 ************************************ 00:11:14.611 START TEST nvmf_filesystem_in_capsule 00:11:14.611 ************************************ 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2883204 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2883204 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2883204 ']' 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:14.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:14.611 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.611 [2024-07-26 13:52:42.030205] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:11:14.611 [2024-07-26 13:52:42.030252] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.872 EAL: No free 2048 kB hugepages reported on node 1 00:11:14.872 [2024-07-26 13:52:42.090319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.872 [2024-07-26 13:52:42.169781] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.872 [2024-07-26 13:52:42.169828] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.872 [2024-07-26 13:52:42.169835] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.872 [2024-07-26 13:52:42.169841] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.872 [2024-07-26 13:52:42.169846] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:14.872 [2024-07-26 13:52:42.169913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.872 [2024-07-26 13:52:42.170010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.872 [2024-07-26 13:52:42.170101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.872 [2024-07-26 13:52:42.170103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.442 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.442 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:15.442 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:15.442 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:15.442 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.442 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.442 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:15.442 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:15.442 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.442 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 
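The in-capsule run that starts here follows the same flow as the no-in-capsule run above; the only material difference is that nvmf_create_transport is given -c 4096. As a reading aid, a condensed sketch of that flow, reconstructed from the xtrace records that follow (target/filesystem.sh plus the nvme connect call). The direct rpc.py invocation, the for loop, and the $nvmfpid variable are illustrative stand-ins, not the verbatim script, which drives these steps through rpc_cmd and run_test.

  # Target side: TCP transport with 4096-byte in-capsule data, a 512-block x 512-byte
  # malloc bdev, one subsystem with one namespace and one TCP listener.
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
  rpc.py bdev_malloc_create 512 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: connect over TCP, partition the exported namespace, then exercise
  # each filesystem on /dev/nvme0n1p1 while checking that the target stays up.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      --hostid=80aaeb9f-0274-ea11-906e-0017a4403562
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%

  for fstype in ext4 btrfs xfs; do              # the script runs these as run_test filesystem_in_capsule_*
      [ "$fstype" = ext4 ] && force=-F || force=-f
      mkfs.$fstype $force /dev/nvme0n1p1
      mount /dev/nvme0n1p1 /mnt/device
      touch /mnt/device/aaa && sync
      rm /mnt/device/aaa && sync
      umount /mnt/device
      kill -0 "$nvmfpid"                        # nvmf_tgt (pid 2883204 in this log) must still be alive
      lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition must still be visible after the cycle
  done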
00:11:15.442 [2024-07-26 13:52:42.878383] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.701 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.701 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:15.702 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.702 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.702 Malloc1 00:11:15.702 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.702 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:15.702 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.702 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.702 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.702 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:15.702 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.702 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.702 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.702 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.702 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.702 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.702 [2024-07-26 13:52:43.022827] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.702 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.702 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:15.702 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:15.702 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:15.702 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:15.702 13:52:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:15.702 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:15.702 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.702 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.702 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.702 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:15.702 { 00:11:15.702 "name": "Malloc1", 00:11:15.702 "aliases": [ 00:11:15.702 "e92b8dcb-e003-4edd-b618-0c9544a4b15b" 00:11:15.702 ], 00:11:15.702 "product_name": "Malloc disk", 00:11:15.702 "block_size": 512, 00:11:15.702 "num_blocks": 1048576, 00:11:15.702 "uuid": "e92b8dcb-e003-4edd-b618-0c9544a4b15b", 00:11:15.702 "assigned_rate_limits": { 00:11:15.702 "rw_ios_per_sec": 0, 00:11:15.702 "rw_mbytes_per_sec": 0, 00:11:15.702 "r_mbytes_per_sec": 0, 00:11:15.702 "w_mbytes_per_sec": 0 00:11:15.702 }, 00:11:15.702 "claimed": true, 00:11:15.702 "claim_type": "exclusive_write", 00:11:15.702 "zoned": false, 00:11:15.702 "supported_io_types": { 00:11:15.702 "read": true, 00:11:15.702 "write": true, 00:11:15.702 "unmap": true, 00:11:15.702 "flush": true, 00:11:15.702 "reset": true, 00:11:15.702 "nvme_admin": false, 00:11:15.702 "nvme_io": false, 00:11:15.702 "nvme_io_md": false, 00:11:15.702 "write_zeroes": true, 00:11:15.702 "zcopy": true, 00:11:15.702 "get_zone_info": false, 00:11:15.702 "zone_management": false, 00:11:15.702 "zone_append": false, 00:11:15.702 "compare": false, 00:11:15.702 "compare_and_write": false, 00:11:15.702 "abort": true, 00:11:15.702 "seek_hole": false, 00:11:15.702 "seek_data": false, 00:11:15.702 "copy": true, 00:11:15.702 "nvme_iov_md": false 00:11:15.702 }, 00:11:15.702 "memory_domains": [ 00:11:15.702 { 00:11:15.702 "dma_device_id": "system", 00:11:15.702 "dma_device_type": 1 00:11:15.702 }, 00:11:15.702 { 00:11:15.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.702 "dma_device_type": 2 00:11:15.702 } 00:11:15.702 ], 00:11:15.702 "driver_specific": {} 00:11:15.702 } 00:11:15.702 ]' 00:11:15.702 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:15.702 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:15.702 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:15.962 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:15.962 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:15.962 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:15.962 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:15.962 13:52:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:16.901 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:16.901 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:16.902 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.902 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:16.902 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:18.812 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:19.072 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:19.072 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:19.072 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:19.072 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:19.072 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:19.072 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:19.072 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:19.072 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:19.072 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:19.072 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:19.072 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:19.072 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:19.072 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:19.072 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:19.072 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:19.072 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:19.332 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:19.332 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:20.713 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:20.713 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:20.713 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:20.713 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:20.713 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.713 ************************************ 00:11:20.713 START TEST filesystem_in_capsule_ext4 00:11:20.713 ************************************ 00:11:20.713 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:20.713 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:20.713 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:20.713 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:20.713 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:20.713 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:20.713 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:20.713 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:20.713 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:20.713 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:20.713 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:20.713 mke2fs 1.46.5 (30-Dec-2021) 00:11:20.713 Discarding device blocks: 0/522240 done 00:11:20.713 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:20.713 Filesystem UUID: d6b6b946-4e48-4d19-9420-ee8ea7dce16e 00:11:20.713 Superblock backups stored on blocks: 00:11:20.713 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:11:20.713 00:11:20.713 Allocating group tables: 0/64 done 00:11:20.713 Writing inode tables: 0/64 done 00:11:20.713 Creating journal (8192 blocks): done 00:11:21.911 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:11:21.911 00:11:21.911 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:21.911 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:21.911 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:21.911 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:21.911 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:21.911 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:21.911 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:21.911 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:21.911 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2883204 00:11:22.171 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:22.171 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:22.171 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:22.171 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:22.171 00:11:22.171 real 0m1.594s 00:11:22.171 user 0m0.022s 00:11:22.171 sys 0m0.046s 00:11:22.171 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.171 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:22.171 ************************************ 00:11:22.171 END TEST filesystem_in_capsule_ext4 00:11:22.171 ************************************ 00:11:22.171 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:22.171 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:22.171 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.171 13:52:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.171 ************************************ 00:11:22.171 START TEST filesystem_in_capsule_btrfs 00:11:22.171 ************************************ 00:11:22.171 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:22.171 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:22.171 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:22.171 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:22.171 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:22.171 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:22.171 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:22.171 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:22.171 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:22.171 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:22.171 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:22.432 btrfs-progs v6.6.2 00:11:22.432 See https://btrfs.readthedocs.io for more information. 00:11:22.432 00:11:22.432 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:22.432 NOTE: several default settings have changed in version 5.15, please make sure 00:11:22.432 this does not affect your deployments: 00:11:22.432 - DUP for metadata (-m dup) 00:11:22.432 - enabled no-holes (-O no-holes) 00:11:22.432 - enabled free-space-tree (-R free-space-tree) 00:11:22.432 00:11:22.432 Label: (null) 00:11:22.432 UUID: f7725a3a-39d2-41cc-9246-e9ee23d667a0 00:11:22.432 Node size: 16384 00:11:22.432 Sector size: 4096 00:11:22.432 Filesystem size: 510.00MiB 00:11:22.432 Block group profiles: 00:11:22.432 Data: single 8.00MiB 00:11:22.432 Metadata: DUP 32.00MiB 00:11:22.432 System: DUP 8.00MiB 00:11:22.432 SSD detected: yes 00:11:22.432 Zoned device: no 00:11:22.432 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:22.432 Runtime features: free-space-tree 00:11:22.432 Checksum: crc32c 00:11:22.432 Number of devices: 1 00:11:22.432 Devices: 00:11:22.432 ID SIZE PATH 00:11:22.432 1 510.00MiB /dev/nvme0n1p1 00:11:22.432 00:11:22.432 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:22.432 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2883204 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:23.372 00:11:23.372 real 0m1.100s 00:11:23.372 user 0m0.018s 00:11:23.372 sys 0m0.060s 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.372 13:52:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:23.372 ************************************ 00:11:23.372 END TEST filesystem_in_capsule_btrfs 00:11:23.372 ************************************ 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.372 ************************************ 00:11:23.372 START TEST filesystem_in_capsule_xfs 00:11:23.372 ************************************ 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:23.372 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:23.372 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:23.372 = sectsz=512 attr=2, projid32bit=1 00:11:23.372 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:23.372 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:23.372 data = bsize=4096 blocks=130560, imaxpct=25 00:11:23.372 = sunit=0 swidth=0 blks 00:11:23.372 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:23.372 log =internal log bsize=4096 blocks=16384, version=2 00:11:23.372 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:23.372 realtime =none extsz=4096 blocks=0, 
rtextents=0 00:11:24.312 Discarding blocks...Done. 00:11:24.312 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:24.312 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:26.223 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:26.223 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:26.223 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:26.223 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:26.223 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:26.223 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:26.223 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2883204 00:11:26.223 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:26.223 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:26.223 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:26.223 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:26.223 00:11:26.223 real 0m2.975s 00:11:26.223 user 0m0.019s 00:11:26.223 sys 0m0.053s 00:11:26.223 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:26.223 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:26.223 ************************************ 00:11:26.223 END TEST filesystem_in_capsule_xfs 00:11:26.223 ************************************ 00:11:26.223 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:26.484 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:26.484 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:26.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.484 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:26.484 13:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:26.484 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:26.484 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.484 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:26.484 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.484 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:26.484 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.484 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.484 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.485 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.485 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:26.485 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2883204 00:11:26.485 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2883204 ']' 00:11:26.485 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2883204 00:11:26.485 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:26.485 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:26.485 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2883204 00:11:26.485 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:26.485 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:26.485 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2883204' 00:11:26.485 killing process with pid 2883204 00:11:26.485 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2883204 00:11:26.485 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2883204 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:27.095 00:11:27.095 real 0m12.259s 00:11:27.095 user 0m48.065s 
00:11:27.095 sys 0m1.119s 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.095 ************************************ 00:11:27.095 END TEST nvmf_filesystem_in_capsule 00:11:27.095 ************************************ 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:27.095 rmmod nvme_tcp 00:11:27.095 rmmod nvme_fabrics 00:11:27.095 rmmod nvme_keyring 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.095 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.004 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:29.004 00:11:29.004 real 0m32.583s 00:11:29.004 user 1m39.117s 00:11:29.004 sys 0m6.342s 00:11:29.004 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:29.004 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:29.004 ************************************ 00:11:29.004 END TEST nvmf_filesystem 00:11:29.004 ************************************ 00:11:29.004 13:52:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:29.004 13:52:56 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:29.004 13:52:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:29.004 13:52:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:29.264 ************************************ 00:11:29.264 START TEST nvmf_target_discovery 00:11:29.264 ************************************ 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:29.264 * Looking for test storage... 00:11:29.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.264 13:52:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.264 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:11:29.265 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:11:34.545 13:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.545 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:34.546 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:34.546 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:34.546 Found net devices under 0000:86:00.0: cvl_0_0 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.546 13:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:34.546 Found net devices under 0000:86:00.1: cvl_0_1 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.546 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:34.546 13:53:01 
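
The device scan above resolves each supported NVMe-capable PCI function (both E810 ports, 0x8086:0x159b, on this runner) to its kernel net device through sysfs, which is where the cvl_0_0 and cvl_0_1 names come from. A minimal sketch of that mapping, with the PCI addresses hard-coded from this trace:

    for pci in 0000:86:00.0 0000:86:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            echo "Found net devices under $pci: ${path##*/}"
        done
    done
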
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.806 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:34.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:11:34.806 00:11:34.806 --- 10.0.0.2 ping statistics --- 00:11:34.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.806 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:34.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.456 ms 00:11:34.806 00:11:34.806 --- 10.0.0.1 ping statistics --- 00:11:34.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.806 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2888869 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2888869 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2888869 ']' 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:34.806 13:53:02 
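
nvmf_tcp_init, traced above, moves one E810 port (cvl_0_0) into a private network namespace to act as the target side, keeps its sibling (cvl_0_1) in the root namespace as the initiator side, opens TCP port 4420, verifies reachability both ways, and then starts nvmf_tgt inside the namespace. Condensed into a standalone sketch (interface names, addresses and the core mask are the ones used in this run; the relative nvmf_tgt path and the backgrounding are illustrative):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
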
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:34.806 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.806 [2024-07-26 13:53:02.085005] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:11:34.806 [2024-07-26 13:53:02.085061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.806 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.806 [2024-07-26 13:53:02.143695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.806 [2024-07-26 13:53:02.224799] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.806 [2024-07-26 13:53:02.224835] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.806 [2024-07-26 13:53:02.224842] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.806 [2024-07-26 13:53:02.224848] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.806 [2024-07-26 13:53:02.224853] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:34.806 [2024-07-26 13:53:02.224895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.806 [2024-07-26 13:53:02.224991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.806 [2024-07-26 13:53:02.225061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.806 [2024-07-26 13:53:02.225062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 [2024-07-26 13:53:02.933505] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 Null1 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 [2024-07-26 13:53:02.978975] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 Null2 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.746 13:53:02 
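
With the target up, discovery.sh provisions its topology over the RPC socket: a TCP transport, then for each of four subsystems a null bdev, the subsystem itself with a fixed serial, a namespace and a TCP listener on 10.0.0.2:4420. The trace above shows the first iteration; the same pattern repeats for Null2 through Null4 below, followed by a discovery listener and a referral to port 4430. A standalone sketch using scripts/rpc.py (shortened to rpc.py here) against the default /var/tmp/spdk.sock, which is assumed to be what the harness's rpc_cmd wrapper amounts to:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 4); do
        rpc.py bdev_null_create "Null$i" 102400 512
        # serials come out as SPDK00000000000001 .. SPDK00000000000004, as in the trace
        rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$(printf '%014d' "$i")"
        rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
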
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.746 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 Null3 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 Null4 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.746 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:35.746 00:11:35.746 Discovery Log Number of Records 6, Generation counter 6 00:11:35.746 =====Discovery Log Entry 0====== 00:11:35.746 trtype: tcp 00:11:35.746 adrfam: ipv4 00:11:35.746 subtype: current discovery subsystem 00:11:35.746 treq: not required 00:11:35.746 portid: 0 00:11:35.746 trsvcid: 4420 00:11:35.746 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:35.747 traddr: 10.0.0.2 00:11:35.747 eflags: explicit discovery connections, duplicate discovery information 00:11:35.747 sectype: none 00:11:35.747 =====Discovery Log Entry 1====== 00:11:35.747 trtype: tcp 00:11:35.747 adrfam: ipv4 00:11:35.747 subtype: nvme subsystem 00:11:35.747 treq: not required 00:11:35.747 portid: 0 00:11:35.747 trsvcid: 4420 00:11:35.747 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:35.747 traddr: 10.0.0.2 00:11:35.747 eflags: none 00:11:35.747 sectype: none 00:11:35.747 =====Discovery Log Entry 2====== 00:11:35.747 trtype: tcp 00:11:35.747 adrfam: ipv4 00:11:35.747 subtype: nvme subsystem 00:11:35.747 treq: not required 00:11:35.747 portid: 0 00:11:35.747 trsvcid: 4420 00:11:35.747 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:35.747 traddr: 10.0.0.2 00:11:35.747 eflags: none 00:11:35.747 sectype: none 00:11:35.747 =====Discovery Log Entry 3====== 00:11:35.747 trtype: tcp 00:11:35.747 adrfam: ipv4 00:11:35.747 subtype: nvme subsystem 00:11:35.747 treq: not required 00:11:35.747 portid: 0 00:11:35.747 trsvcid: 4420 00:11:35.747 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:35.747 traddr: 10.0.0.2 00:11:35.747 eflags: none 00:11:35.747 sectype: none 00:11:35.747 =====Discovery Log Entry 4====== 00:11:35.747 trtype: tcp 00:11:35.747 adrfam: ipv4 00:11:35.747 subtype: nvme subsystem 00:11:35.747 treq: not required 00:11:35.747 portid: 0 00:11:35.747 trsvcid: 4420 00:11:35.747 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:35.747 traddr: 10.0.0.2 00:11:35.747 eflags: none 00:11:35.747 sectype: none 00:11:35.747 =====Discovery Log Entry 5====== 00:11:35.747 trtype: tcp 00:11:35.747 adrfam: ipv4 00:11:35.747 subtype: discovery subsystem referral 00:11:35.747 treq: not required 00:11:35.747 portid: 0 00:11:35.747 trsvcid: 4430 00:11:35.747 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:35.747 traddr: 10.0.0.2 00:11:35.747 eflags: none 00:11:35.747 sectype: none 00:11:35.747 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:35.747 Perform nvmf subsystem discovery via RPC 00:11:35.747 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:35.747 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.747 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.747 [ 00:11:35.747 { 00:11:35.747 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:35.747 "subtype": "Discovery", 00:11:35.747 "listen_addresses": [ 00:11:35.747 { 00:11:35.747 "trtype": "TCP", 00:11:35.747 "adrfam": "IPv4", 00:11:35.747 "traddr": "10.0.0.2", 00:11:35.747 "trsvcid": "4420" 00:11:35.747 } 00:11:35.747 ], 00:11:35.747 "allow_any_host": true, 00:11:35.747 "hosts": [] 00:11:35.747 }, 00:11:35.747 { 00:11:35.747 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:35.747 "subtype": "NVMe", 00:11:35.747 "listen_addresses": [ 00:11:35.747 { 00:11:35.747 "trtype": "TCP", 00:11:35.747 "adrfam": "IPv4", 00:11:35.747 
"traddr": "10.0.0.2", 00:11:35.747 "trsvcid": "4420" 00:11:35.747 } 00:11:35.747 ], 00:11:35.747 "allow_any_host": true, 00:11:35.747 "hosts": [], 00:11:35.747 "serial_number": "SPDK00000000000001", 00:11:35.747 "model_number": "SPDK bdev Controller", 00:11:35.747 "max_namespaces": 32, 00:11:35.747 "min_cntlid": 1, 00:11:35.747 "max_cntlid": 65519, 00:11:35.747 "namespaces": [ 00:11:35.747 { 00:11:35.747 "nsid": 1, 00:11:35.747 "bdev_name": "Null1", 00:11:35.747 "name": "Null1", 00:11:35.747 "nguid": "447C0973161740CFA3ABA1044CC61C04", 00:11:35.747 "uuid": "447c0973-1617-40cf-a3ab-a1044cc61c04" 00:11:35.747 } 00:11:35.747 ] 00:11:35.747 }, 00:11:35.747 { 00:11:35.747 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:36.006 "subtype": "NVMe", 00:11:36.006 "listen_addresses": [ 00:11:36.006 { 00:11:36.006 "trtype": "TCP", 00:11:36.006 "adrfam": "IPv4", 00:11:36.006 "traddr": "10.0.0.2", 00:11:36.006 "trsvcid": "4420" 00:11:36.006 } 00:11:36.006 ], 00:11:36.006 "allow_any_host": true, 00:11:36.006 "hosts": [], 00:11:36.006 "serial_number": "SPDK00000000000002", 00:11:36.006 "model_number": "SPDK bdev Controller", 00:11:36.006 "max_namespaces": 32, 00:11:36.006 "min_cntlid": 1, 00:11:36.006 "max_cntlid": 65519, 00:11:36.006 "namespaces": [ 00:11:36.006 { 00:11:36.006 "nsid": 1, 00:11:36.006 "bdev_name": "Null2", 00:11:36.006 "name": "Null2", 00:11:36.006 "nguid": "7328E3C386614B64856941292732EBC2", 00:11:36.006 "uuid": "7328e3c3-8661-4b64-8569-41292732ebc2" 00:11:36.006 } 00:11:36.006 ] 00:11:36.006 }, 00:11:36.006 { 00:11:36.006 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:36.006 "subtype": "NVMe", 00:11:36.006 "listen_addresses": [ 00:11:36.006 { 00:11:36.006 "trtype": "TCP", 00:11:36.006 "adrfam": "IPv4", 00:11:36.006 "traddr": "10.0.0.2", 00:11:36.006 "trsvcid": "4420" 00:11:36.006 } 00:11:36.006 ], 00:11:36.006 "allow_any_host": true, 00:11:36.006 "hosts": [], 00:11:36.006 "serial_number": "SPDK00000000000003", 00:11:36.006 "model_number": "SPDK bdev Controller", 00:11:36.006 "max_namespaces": 32, 00:11:36.006 "min_cntlid": 1, 00:11:36.006 "max_cntlid": 65519, 00:11:36.006 "namespaces": [ 00:11:36.006 { 00:11:36.006 "nsid": 1, 00:11:36.006 "bdev_name": "Null3", 00:11:36.006 "name": "Null3", 00:11:36.006 "nguid": "EC0FFB82803344049C362A639EEC5BCA", 00:11:36.006 "uuid": "ec0ffb82-8033-4404-9c36-2a639eec5bca" 00:11:36.006 } 00:11:36.006 ] 00:11:36.006 }, 00:11:36.006 { 00:11:36.006 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:36.006 "subtype": "NVMe", 00:11:36.006 "listen_addresses": [ 00:11:36.006 { 00:11:36.006 "trtype": "TCP", 00:11:36.006 "adrfam": "IPv4", 00:11:36.006 "traddr": "10.0.0.2", 00:11:36.006 "trsvcid": "4420" 00:11:36.006 } 00:11:36.006 ], 00:11:36.006 "allow_any_host": true, 00:11:36.006 "hosts": [], 00:11:36.006 "serial_number": "SPDK00000000000004", 00:11:36.006 "model_number": "SPDK bdev Controller", 00:11:36.006 "max_namespaces": 32, 00:11:36.006 "min_cntlid": 1, 00:11:36.006 "max_cntlid": 65519, 00:11:36.006 "namespaces": [ 00:11:36.006 { 00:11:36.006 "nsid": 1, 00:11:36.006 "bdev_name": "Null4", 00:11:36.006 "name": "Null4", 00:11:36.006 "nguid": "2C8D7822D26A4562B3497B345C7A3BED", 00:11:36.006 "uuid": "2c8d7822-d26a-4562-b349-7b345c7a3bed" 00:11:36.006 } 00:11:36.006 ] 00:11:36.006 } 00:11:36.006 ] 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:36.006 13:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:36.006 13:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.006 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:36.007 rmmod nvme_tcp 00:11:36.007 rmmod nvme_fabrics 00:11:36.007 rmmod nvme_keyring 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:36.007 13:53:03 
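
The nvme discover output and the nvmf_get_subsystems dump above confirm the expected state: six discovery log records (the current discovery subsystem, cnode1 through cnode4, and the port-4430 referral) and five subsystem entries on the target. Teardown, also traced above, mirrors the setup: delete each subsystem before its null bdev, drop the referral, and expect bdev_get_bdevs to come back empty before shutting the target down. As a standalone sketch with the same assumed rpc.py setup as before:

    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420     # 6 records expected
    rpc.py nvmf_get_subsystems | jq -r '.[].nqn'                   # discovery NQN plus cnode1..cnode4
    for i in $(seq 1 4); do
        rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        rpc.py bdev_null_delete "Null$i"
    done
    rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    rpc.py bdev_get_bdevs | jq -r '.[].name'                       # should print nothing after cleanup
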
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2888869 ']' 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2888869 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2888869 ']' 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2888869 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2888869 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2888869' 00:11:36.007 killing process with pid 2888869 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2888869 00:11:36.007 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2888869 00:11:36.266 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:36.266 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:36.266 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:36.266 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:36.266 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:36.266 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.266 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.266 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:38.805 00:11:38.805 real 0m9.231s 00:11:38.805 user 0m7.201s 00:11:38.805 sys 0m4.471s 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.805 ************************************ 00:11:38.805 END TEST nvmf_target_discovery 00:11:38.805 ************************************ 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:38.805 ************************************ 00:11:38.805 START TEST nvmf_referrals 00:11:38.805 ************************************ 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:38.805 * Looking for test storage... 00:11:38.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.805 13:53:05 
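
nvmftestfini, traced just before the referrals test starts, returns the machine to a clean state: the nvmf_tgt process is killed and reaped, the nvme-tcp and nvme-fabrics modules are unloaded, the target namespace is removed and the initiator address is flushed. Roughly (the explicit netns delete is an assumption about what _remove_spdk_ns does here):

    kill "$nvmfpid" && wait "$nvmfpid"      # pid 2888869 in this run
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk         # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
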
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.805 13:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:11:38.805 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # 
net_devs=() 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:44.082 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.082 13:53:10 
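For reference, gather_supported_nvmf_pci_devs above builds allow-lists of NIC device IDs (the e810, x722 and mlx arrays), then walks each matching PCI function; the lines that follow resolve the net interface for each function straight from sysfs. A condensed sketch of that loop, using the two 0x159b functions found in this run (the pci_bus_cache lookup itself is elided):
for pci in 0000:86:00.0 0000:86:00.1; do            # the two Intel 0x159b functions found above
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the interface names
  # (the real helper also checks that each interface's operstate is "up")
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
done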
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:44.082 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:44.082 Found net devices under 0000:86:00.0: cvl_0_0 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 
00:11:44.082 Found net devices under 0000:86:00.1: cvl_0_1 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:44.082 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:44.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:44.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:11:44.083 00:11:44.083 --- 10.0.0.2 ping statistics --- 00:11:44.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.083 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:44.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.364 ms 00:11:44.083 00:11:44.083 --- 10.0.0.1 ping statistics --- 00:11:44.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.083 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2892425 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2892425 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2892425 ']' 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
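The nvmf_tcp_init sequence above splits the two ports across network namespaces so target and initiator traffic really crosses the NICs: cvl_0_0 is moved into cvl_0_0_ns_spdk and gets 10.0.0.2, cvl_0_1 stays in the default namespace with 10.0.0.1, and the two pings confirm reachability in both directions. Condensed from the trace (interface names and addresses are the ones used in this run):
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP data port
ping -c 1 10.0.0.2                                               # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator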
00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.083 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.083 [2024-07-26 13:53:10.860612] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:11:44.083 [2024-07-26 13:53:10.860656] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.083 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.083 [2024-07-26 13:53:10.916416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.083 [2024-07-26 13:53:10.989674] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.083 [2024-07-26 13:53:10.989712] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.083 [2024-07-26 13:53:10.989719] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.083 [2024-07-26 13:53:10.989724] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.083 [2024-07-26 13:53:10.989729] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.083 [2024-07-26 13:53:10.989788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.083 [2024-07-26 13:53:10.989805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.083 [2024-07-26 13:53:10.989893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.083 [2024-07-26 13:53:10.989895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.342 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:44.342 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:44.342 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:44.342 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:44.342 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.342 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.342 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:44.342 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.342 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.342 [2024-07-26 13:53:11.707394] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.342 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.342 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd 
nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:44.342 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.342 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.342 [2024-07-26 13:53:11.720787] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:44.342 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.342 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:44.342 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.342 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.342 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.342 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:44.342 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.343 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.343 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.343 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:44.343 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.343 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.343 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.343 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:44.343 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:44.343 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.343 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.343 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.602 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:44.602 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:44.602 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:44.602 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:44.602 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:44.602 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.602 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:44.602 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.602 13:53:11 
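With the target up, referrals.sh@40-48 build the discovery configuration just traced: a TCP transport, a discovery listener on 10.0.0.2:8009, and three referrals pointing at 127.0.0.2-127.0.0.4 on the referral port 4430. As standalone RPC calls (rpc_cmd is the test framework's wrapper around SPDK's JSON-RPC client; option values are exactly those from the trace):
rpc_cmd nvmf_create_transport -t tcp -o -u 8192              # NVMF_TRANSPORT_OPTS as passed by referrals.sh@40
rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do                  # NVMF_REFERRAL_IP_1..3
  rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
rpc_cmd nvmf_discovery_get_referrals | jq length             # expected to report 3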
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.602 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:44.602 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:44.602 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:44.602 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:44.602 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:44.602 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.602 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:44.602 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:44.602 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:44.602 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:44.602 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:44.602 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.602 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.863 13:53:12 
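get_referral_ips is run twice above, once per source: the rpc branch asks the target directly, the nvme branch runs a discovery from the initiator side and filters out the current discovery subsystem itself, and both views must sort to the same address list. The two commands, as traced (the host NQN/ID are this run's generated values):
# Target's view, over JSON-RPC:
rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
# Initiator's view, over the wire from the discovery service on 10.0.0.2:8009:
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
              --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 \
              -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
# Both reported 127.0.0.2, 127.0.0.3 and 127.0.0.4 here, before the referrals were removed again.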
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:44.863 13:53:12 
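After the empty-list check, the same referral address is added back twice with the -n option, once as a plain discovery referral and once tied to the subsystem NQN, which is why the rpc view above now reports 127.0.0.2 twice. The two calls, as traced (NQNs come from referrals.sh@15-16):
rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr'     # 127.0.0.2, 127.0.0.2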
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:44.863 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:45.123 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:45.123 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:45.123 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:45.123 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:45.123 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:45.123 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.123 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:45.383 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:45.383 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:45.383 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:45.383 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:45.384 13:53:12 
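get_discovery_entries classifies the initiator-side log page by subtype, which is how the trace above confirms that the subsystem-NQN referral appears as an "nvme subsystem" record (subnqn nqn.2016-06.io.spdk:cnode1) while the -n discovery referral stays a "discovery subsystem referral" (subnqn nqn.2014-08.org.nvmexpress.discovery). A condensed sketch of that helper; the trace shows the expanded --hostnqn/--hostid form, written here with the NVME_HOST array defined in nvmf/common.sh:
get_discovery_entries() {
  local subtype=$1
  nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq ".records[] | select(.subtype == \"$subtype\")"
}
get_discovery_entries 'nvme subsystem'               | jq -r .subnqn   # nqn.2016-06.io.spdk:cnode1
get_discovery_entries 'discovery subsystem referral' | jq -r .subnqn   # nqn.2014-08.org.nvmexpress.discovery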
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:45.384 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:45.644 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:45.644 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:45.644 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:45.644 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:45.644 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:45.644 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.644 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:45.644 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:45.644 13:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:45.644 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:45.644 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:45.644 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.644 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:45.644 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:45.644 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:45.644 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.644 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.644 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.644 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:45.644 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.644 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:45.644 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.644 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@86 -- # nvmftestfini 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:45.905 rmmod nvme_tcp 00:11:45.905 rmmod nvme_fabrics 00:11:45.905 rmmod nvme_keyring 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2892425 ']' 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2892425 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2892425 ']' 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2892425 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2892425 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2892425' 00:11:45.905 killing process with pid 2892425 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 2892425 00:11:45.905 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2892425 00:11:46.165 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:46.165 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:46.165 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:46.165 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:46.165 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:46.165 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.165 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.165 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.077 13:53:15 
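nvmftestfini then unwinds everything in reverse: the kernel initiator modules are unloaded, the nvmf_tgt process (pid 2892425, confirmed to be running as reactor_0) is killed and reaped, and nvmf_tcp_fini plus _remove_spdk_ns drop the test addresses and the cvl_0_0_ns_spdk namespace. The teardown, condensed from the trace:
modprobe -v -r nvme-tcp        # also unloads nvme_fabrics / nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics
kill 2892425                   # killprocess: only allowed because the process name is reactor_0, not sudo
wait 2892425                   # reap it (works here because the test shell started nvmf_tgt)
ip -4 addr flush cvl_0_1       # nvmf_tcp_fini; _remove_spdk_ns deletes cvl_0_0_ns_spdk afterwards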
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:48.339 00:11:48.339 real 0m9.759s 00:11:48.339 user 0m11.847s 00:11:48.339 sys 0m4.274s 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:48.339 ************************************ 00:11:48.339 END TEST nvmf_referrals 00:11:48.339 ************************************ 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:48.339 ************************************ 00:11:48.339 START TEST nvmf_connect_disconnect 00:11:48.339 ************************************ 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:48.339 * Looking for test storage... 00:11:48.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.339 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:11:48.340 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:53.705 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:53.705 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:53.705 13:53:20 
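The nvmf_connect_disconnect preamble earlier sets MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 before rerunning nvmftestinit, which is why the PCI and namespace discovery above repeats verbatim for this test. A hypothetical sketch of how those constants would typically feed a malloc bdev for the target to export; the bdev name Malloc1 and this exact call are assumptions, not taken from this log:
rpc_cmd bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc1   # assumed: 64 MiB backing bdev, 512-byte blocks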
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:53.705 Found net devices under 0000:86:00.0: cvl_0_0 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:53.705 Found net devices under 0000:86:00.1: cvl_0_1 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.705 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:53.706 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:53.706 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:53.706 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:53.706 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:53.706 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:53.706 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.706 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:53.706 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:53.706 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:53.706 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:53.706 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:53.706 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:53.706 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:53.706 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:53.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:11:53.706 00:11:53.706 --- 10.0.0.2 ping statistics --- 00:11:53.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.706 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:53.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:11:53.706 00:11:53.706 --- 10.0.0.1 ping statistics --- 00:11:53.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.706 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2896488 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2896488 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2896488 ']' 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:53.706 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.966 [2024-07-26 13:53:21.178300] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:11:53.966 [2024-07-26 13:53:21.178340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.966 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.966 [2024-07-26 13:53:21.236586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.966 [2024-07-26 13:53:21.316896] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.966 [2024-07-26 13:53:21.316934] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.966 [2024-07-26 13:53:21.316941] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.966 [2024-07-26 13:53:21.316947] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.966 [2024-07-26 13:53:21.316952] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.966 [2024-07-26 13:53:21.316994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.966 [2024-07-26 13:53:21.317082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.966 [2024-07-26 13:53:21.317132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.966 [2024-07-26 13:53:21.317134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.907 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:54.907 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:54.907 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:54.907 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:54.907 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.907 [2024-07-26 13:53:22.035306] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.907 13:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.907 [2024-07-26 13:53:22.087243] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:54.907 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:58.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:11.406 13:53:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:11.406 rmmod nvme_tcp 00:12:11.406 rmmod nvme_fabrics 00:12:11.406 rmmod nvme_keyring 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2896488 ']' 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2896488 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2896488 ']' 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 2896488 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2896488 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2896488' 00:12:11.406 killing process with pid 2896488 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2896488 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2896488 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:11.406 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.407 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.407 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.318 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:13.318 00:12:13.318 real 0m25.069s 00:12:13.318 user 1m10.192s 00:12:13.318 sys 0m5.115s 00:12:13.318 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:13.318 13:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:13.318 ************************************ 00:12:13.318 END TEST nvmf_connect_disconnect 00:12:13.318 ************************************ 00:12:13.318 13:53:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:13.318 13:53:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:13.318 13:53:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:13.318 13:53:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:13.318 ************************************ 00:12:13.318 START TEST nvmf_multitarget 00:12:13.318 ************************************ 00:12:13.318 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:13.578 * Looking for test storage... 00:12:13.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.578 13:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.578 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.579 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.579 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:13.579 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:13.579 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:13.579 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:18.908 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:18.908 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:18.908 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:18.908 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:18.908 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:18.908 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:18.908 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:18.908 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:18.908 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:12:18.908 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:18.908 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:18.908 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:18.908 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:18.908 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:18.908 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:18.908 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.908 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.908 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.908 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.908 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:18.909 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.909 13:53:45 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:18.909 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:18.909 Found net devices under 0000:86:00.0: cvl_0_0 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:18.909 Found net devices under 0000:86:00.1: cvl_0_1 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:18.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:18.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:12:18.909 00:12:18.909 --- 10.0.0.2 ping statistics --- 00:12:18.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.909 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:18.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:18.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.414 ms 00:12:18.909 00:12:18.909 --- 10.0.0.1 ping statistics --- 00:12:18.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.909 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2902643 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2902643 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2902643 ']' 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:18.909 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
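As in the previous test, nvmfappstart runs the target inside the test namespace and then waits for its RPC socket before issuing any configuration calls. The fragment below is a rough, illustrative equivalent of that start-and-wait step; the binary path and arguments are the ones from this run, while the polling loop only approximates what waitforlisten in autotest_common.sh does (its real implementation is not visible in this log).

    NS=cvl_0_0_ns_spdk
    NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    RPC_SOCK=/var/tmp/spdk.sock

    # Start the target in the namespace: instance 0, all tracepoint groups, core mask 0xF (4 cores).
    ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Illustrative wait: poll until the process answers on the RPC socket, or bail out if it died.
    until "$RPC_PY" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done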
00:12:18.910 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:18.910 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:18.910 [2024-07-26 13:53:45.580617] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:12:18.910 [2024-07-26 13:53:45.580658] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.910 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.910 [2024-07-26 13:53:45.636974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.910 [2024-07-26 13:53:45.716630] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.910 [2024-07-26 13:53:45.716667] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.910 [2024-07-26 13:53:45.716674] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.910 [2024-07-26 13:53:45.716681] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.910 [2024-07-26 13:53:45.716685] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.910 [2024-07-26 13:53:45.716726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.910 [2024-07-26 13:53:45.716824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.910 [2024-07-26 13:53:45.716918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.910 [2024-07-26 13:53:45.716919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.169 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:19.169 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:19.169 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:19.169 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:19.169 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:19.169 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.169 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:19.169 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:19.169 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:19.170 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:19.170 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:19.429 "nvmf_tgt_1" 00:12:19.429 13:53:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:19.429 "nvmf_tgt_2" 00:12:19.429 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:19.429 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:19.429 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:19.429 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:19.689 true 00:12:19.689 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:19.689 true 00:12:19.689 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:19.689 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:19.949 rmmod nvme_tcp 00:12:19.949 rmmod nvme_fabrics 00:12:19.949 rmmod nvme_keyring 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2902643 ']' 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2902643 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2902643 ']' 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 2902643 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
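Condensed, the multitarget checks traced above are: start from the single default target, create two additional named targets through the test's RPC helper, confirm the target count, delete them again, and confirm the count drops back to one. A sketch with the helper path and arguments copied from this run (the -s 32 argument presumably caps subsystems per target; that reading is an assumption, since the log does not spell it out):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists

    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32        # prints the new target's name ("nvmf_tgt_1")
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]   # default target plus the two new ones

    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target

After those checks, nvmftestfini unloads the nvme-tcp and nvme-fabrics modules (the rmmod output above also shows nvme_keyring going away) and kills the namespaced nvmf_tgt (pid 2902643 in this run), mirroring the teardown already seen at the end of the connect_disconnect test.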
00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2902643 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2902643' 00:12:19.949 killing process with pid 2902643 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2902643 00:12:19.949 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2902643 00:12:20.209 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:20.209 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:20.209 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:20.209 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:20.209 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:20.209 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.209 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.209 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.117 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:22.117 00:12:22.117 real 0m8.802s 00:12:22.117 user 0m8.824s 00:12:22.117 sys 0m4.018s 00:12:22.117 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:22.117 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:22.117 ************************************ 00:12:22.117 END TEST nvmf_multitarget 00:12:22.117 ************************************ 00:12:22.378 13:53:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:22.378 13:53:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:22.378 13:53:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:22.378 13:53:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:22.378 ************************************ 00:12:22.378 START TEST nvmf_rpc 00:12:22.378 ************************************ 00:12:22.378 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:22.378 * Looking for test storage... 
00:12:22.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.378 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:22.378 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:22.378 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:22.379 13:53:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:22.379 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.660 13:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:27.660 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.660 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:27.661 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:27.661 
13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:27.661 Found net devices under 0000:86:00.0: cvl_0_0 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:27.661 Found net devices under 0000:86:00.1: cvl_0_1 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:27.661 13:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:27.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:12:27.661 00:12:27.661 --- 10.0.0.2 ping statistics --- 00:12:27.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.661 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:27.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:27.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:12:27.661 00:12:27.661 --- 10.0.0.1 ping statistics --- 00:12:27.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.661 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:27.661 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2906416 00:12:27.661 13:53:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2906416 00:12:27.661 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2906416 ']' 00:12:27.661 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.661 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:27.661 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.661 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:27.661 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.661 [2024-07-26 13:53:55.047805] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:12:27.661 [2024-07-26 13:53:55.047853] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.661 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.921 [2024-07-26 13:53:55.107516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.921 [2024-07-26 13:53:55.186494] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.921 [2024-07-26 13:53:55.186534] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.921 [2024-07-26 13:53:55.186541] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.921 [2024-07-26 13:53:55.186547] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.921 [2024-07-26 13:53:55.186552] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
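The nvmf_tcp_init and nvmfappstart output above boils down to the bring-up sequence sketched below. It is a condensed approximation using the interface names, addresses, and flags from this run; the real helpers in nvmf/common.sh add retries and error handling, the relative paths to build/bin and scripts/ are assumptions, and waitforlisten is approximated here by polling the RPC socket with rpc.py.

    # Move the first port into a target-side namespace; the second port stays in the root ns as initiator.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic on the initiator port
    ping -c 1 10.0.0.2                                             # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> root ns
    modprobe nvme-tcp                                              # kernel initiator used by the nvme connect calls later

    # Launch the target inside the namespace and wait for its RPC socket to answer.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
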
00:12:27.921 [2024-07-26 13:53:55.186599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.921 [2024-07-26 13:53:55.186622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.921 [2024-07-26 13:53:55.186707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.921 [2024-07-26 13:53:55.186708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.488 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:28.488 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:28.488 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:28.488 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:28.488 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.488 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.488 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:28.488 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.488 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.488 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.488 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:28.488 "tick_rate": 2300000000, 00:12:28.488 "poll_groups": [ 00:12:28.488 { 00:12:28.488 "name": "nvmf_tgt_poll_group_000", 00:12:28.488 "admin_qpairs": 0, 00:12:28.488 "io_qpairs": 0, 00:12:28.488 "current_admin_qpairs": 0, 00:12:28.488 "current_io_qpairs": 0, 00:12:28.488 "pending_bdev_io": 0, 00:12:28.488 "completed_nvme_io": 0, 00:12:28.488 "transports": [] 00:12:28.488 }, 00:12:28.488 { 00:12:28.488 "name": "nvmf_tgt_poll_group_001", 00:12:28.488 "admin_qpairs": 0, 00:12:28.488 "io_qpairs": 0, 00:12:28.488 "current_admin_qpairs": 0, 00:12:28.488 "current_io_qpairs": 0, 00:12:28.488 "pending_bdev_io": 0, 00:12:28.488 "completed_nvme_io": 0, 00:12:28.488 "transports": [] 00:12:28.488 }, 00:12:28.488 { 00:12:28.488 "name": "nvmf_tgt_poll_group_002", 00:12:28.488 "admin_qpairs": 0, 00:12:28.488 "io_qpairs": 0, 00:12:28.488 "current_admin_qpairs": 0, 00:12:28.488 "current_io_qpairs": 0, 00:12:28.488 "pending_bdev_io": 0, 00:12:28.488 "completed_nvme_io": 0, 00:12:28.488 "transports": [] 00:12:28.488 }, 00:12:28.488 { 00:12:28.488 "name": "nvmf_tgt_poll_group_003", 00:12:28.488 "admin_qpairs": 0, 00:12:28.488 "io_qpairs": 0, 00:12:28.488 "current_admin_qpairs": 0, 00:12:28.488 "current_io_qpairs": 0, 00:12:28.488 "pending_bdev_io": 0, 00:12:28.488 "completed_nvme_io": 0, 00:12:28.488 "transports": [] 00:12:28.488 } 00:12:28.488 ] 00:12:28.488 }' 00:12:28.488 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:28.488 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:28.488 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:28.488 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:28.747 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
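The stats check above is the jq/awk pattern rpc.sh applies around transport creation. A minimal sketch of those checks, assuming rpc.py is invoked from an SPDK checkout (the nvmf_create_transport options are copied verbatim from this run):

    stats=$(./scripts/rpc.py nvmf_get_stats)
    jq '.poll_groups[].name' <<< "$stats" | wc -l    # expect 4 poll groups, one per core in -m 0xF
    jq '.poll_groups[0].transports[0]' <<< "$stats"  # null until a transport is created

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

    stats=$(./scripts/rpc.py nvmf_get_stats)
    jq '.poll_groups[].admin_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}'  # 0, no host connected yet
    jq '.poll_groups[].io_qpairs'    <<< "$stats" | awk '{s+=$1} END {print s}'  # 0
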
00:12:28.747 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:28.747 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:28.747 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:28.747 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.747 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.747 [2024-07-26 13:53:56.008688] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.747 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.747 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:28.747 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.747 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.747 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.747 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:28.747 "tick_rate": 2300000000, 00:12:28.747 "poll_groups": [ 00:12:28.747 { 00:12:28.747 "name": "nvmf_tgt_poll_group_000", 00:12:28.747 "admin_qpairs": 0, 00:12:28.747 "io_qpairs": 0, 00:12:28.747 "current_admin_qpairs": 0, 00:12:28.747 "current_io_qpairs": 0, 00:12:28.747 "pending_bdev_io": 0, 00:12:28.747 "completed_nvme_io": 0, 00:12:28.747 "transports": [ 00:12:28.747 { 00:12:28.747 "trtype": "TCP" 00:12:28.747 } 00:12:28.747 ] 00:12:28.747 }, 00:12:28.747 { 00:12:28.747 "name": "nvmf_tgt_poll_group_001", 00:12:28.747 "admin_qpairs": 0, 00:12:28.747 "io_qpairs": 0, 00:12:28.747 "current_admin_qpairs": 0, 00:12:28.747 "current_io_qpairs": 0, 00:12:28.747 "pending_bdev_io": 0, 00:12:28.747 "completed_nvme_io": 0, 00:12:28.747 "transports": [ 00:12:28.747 { 00:12:28.747 "trtype": "TCP" 00:12:28.747 } 00:12:28.747 ] 00:12:28.747 }, 00:12:28.747 { 00:12:28.747 "name": "nvmf_tgt_poll_group_002", 00:12:28.747 "admin_qpairs": 0, 00:12:28.747 "io_qpairs": 0, 00:12:28.747 "current_admin_qpairs": 0, 00:12:28.747 "current_io_qpairs": 0, 00:12:28.747 "pending_bdev_io": 0, 00:12:28.748 "completed_nvme_io": 0, 00:12:28.748 "transports": [ 00:12:28.748 { 00:12:28.748 "trtype": "TCP" 00:12:28.748 } 00:12:28.748 ] 00:12:28.748 }, 00:12:28.748 { 00:12:28.748 "name": "nvmf_tgt_poll_group_003", 00:12:28.748 "admin_qpairs": 0, 00:12:28.748 "io_qpairs": 0, 00:12:28.748 "current_admin_qpairs": 0, 00:12:28.748 "current_io_qpairs": 0, 00:12:28.748 "pending_bdev_io": 0, 00:12:28.748 "completed_nvme_io": 0, 00:12:28.748 "transports": [ 00:12:28.748 { 00:12:28.748 "trtype": "TCP" 00:12:28.748 } 00:12:28.748 ] 00:12:28.748 } 00:12:28.748 ] 00:12:28.748 }' 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:28.748 13:53:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.748 Malloc1 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.748 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.748 [2024-07-26 13:53:56.180811] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:29.008 [2024-07-26 13:53:56.205657] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:29.008 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:29.008 could not add new controller: failed to write to nvme-fabrics device 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.008 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:29.947 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:29.947 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:29.947 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.947 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:29.947 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:32.487 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:32.487 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:32.487 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:32.487 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:32.487 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:32.487 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:32.487 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:32.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.487 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:32.487 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:32.487 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:32.487 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.487 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:32.487 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.487 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:32.487 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:32.487 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.487 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.487 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.487 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.487 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:32.487 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.488 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:32.488 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.488 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:32.488 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.488 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:32.488 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.488 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:32.488 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:32.488 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.488 [2024-07-26 13:53:59.488372] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:32.488 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:32.488 could not add new controller: failed to write to nvme-fabrics device 00:12:32.488 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:32.488 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:32.488 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:32.488 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:32.488 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:32.488 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.488 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.488 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.488 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.426 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.426 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:33.426 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.426 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:33.426 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
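The waitforserial helper that follows each connect above is essentially a bounded poll on lsblk for the subsystem's serial number. A sketch of the pattern, with $HOSTNQN and $HOSTID standing in for the generated uuid values shown in the log:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"
    # poll (the helper in the log caps this at 15 tries) until a block device
    # carrying the subsystem serial appears
    i=0
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
        (( ++i > 15 )) && exit 1
        sleep 2
    done
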
00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.335 [2024-07-26 13:54:02.737563] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.335 
13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.335 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.751 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.751 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:36.751 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.751 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:36.751 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:38.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
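Each pass of the seq 1 5 loop repeats the same subsystem lifecycle seen above; one iteration reduces to the sketch below. The rpc.py path and the $HOSTNQN/$HOSTID placeholders are assumptions, everything else is taken from the log:

    NQN=nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_create_subsystem         "$NQN" -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_listener   "$NQN" -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_ns         "$NQN" Malloc1 -n 5     # attach Malloc1 as namespace 5
    ./scripts/rpc.py nvmf_subsystem_allow_any_host "$NQN"

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$NQN" \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"    # followed by waitforserial as sketched earlier
    nvme disconnect -n "$NQN"                      # waitforserial_disconnect mirrors the lsblk check with grep -q

    ./scripts/rpc.py nvmf_subsystem_remove_ns "$NQN" 5
    ./scripts/rpc.py nvmf_delete_subsystem    "$NQN"
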
00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.662 [2024-07-26 13:54:05.980226] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.662 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.043 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.043 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:12:40.043 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.043 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:40.044 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:41.953 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:41.953 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:41.953 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.953 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:41.953 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.953 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.954 [2024-07-26 13:54:09.286356] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.954 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.333 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:43.333 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:43.333 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:43.333 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:43.333 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:45.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.241 13:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.241 [2024-07-26 13:54:12.536703] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.241 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.620 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.620 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:46.620 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.620 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:46.620 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.595 [2024-07-26 13:54:15.878550] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.595 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.973 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.973 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:49.973 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.973 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:49.973 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:51.923 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:51.923 13:54:18 
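Each pass through the loop traced here (target/rpc.sh lines 81 to 94 in this run) builds a subsystem, exposes it over the namespaced TCP listener, connects from the initiator, then tears everything back down. The sketch below is condensed from the traced commands; it assumes rpc_cmd forwards to scripts/rpc.py against the running nvmf_tgt and that NVME_HOST holds the --hostnqn/--hostid options defined in nvmf/common.sh:

    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # namespace id 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME             # wait for the block device to appear

        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME  # wait for it to go away again

        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done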
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:51.923 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.923 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:51.923 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.923 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:51.923 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.923 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.923 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.924 13:54:19 
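The recurring xtrace_disable / set +x / [[ 0 == 0 ]] fragments around every rpc_cmd call are the helper muting its own trace output and asserting a zero exit status from the RPC. Functionally it behaves like the simplified stand-in below; the real rpc_cmd in common/autotest_common.sh keeps a persistent rpc.py session rather than spawning one per call, and $rootdir here is shorthand for the spdk checkout used by this job:

    rpc_cmd() {
        # Forward the call to the target's JSON-RPC interface and propagate its status.
        local out
        out=$("$rootdir/scripts/rpc.py" "$@") || return $?
        printf '%s\n' "$out"
    }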
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.924 [2024-07-26 13:54:19.184643] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.924 [2024-07-26 13:54:19.232755] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.924 [2024-07-26 13:54:19.284919] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.924 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.925 [2024-07-26 13:54:19.333078] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.925 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.185 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.185 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.185 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.185 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.185 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.185 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.185 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.185 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.185 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.185 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.185 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.185 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.185 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.186 [2024-07-26 13:54:19.381241] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.186 13:54:19 
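The second loop traced in this stretch (target/rpc.sh lines 99 to 107) repeats the subsystem lifecycle without ever connecting a host: create, add a TCP listener, hot-add the Malloc1 namespace with an auto-assigned nsid, then hot-remove it and delete the subsystem, five times in a row. Condensed from the traced commands, with the same rpc_cmd assumption as above:

    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # nsid assigned by the target
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done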
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:52.186 "tick_rate": 2300000000, 00:12:52.186 "poll_groups": [ 00:12:52.186 { 00:12:52.186 "name": "nvmf_tgt_poll_group_000", 00:12:52.186 "admin_qpairs": 2, 00:12:52.186 "io_qpairs": 168, 00:12:52.186 "current_admin_qpairs": 0, 00:12:52.186 "current_io_qpairs": 0, 00:12:52.186 "pending_bdev_io": 0, 00:12:52.186 "completed_nvme_io": 218, 00:12:52.186 "transports": [ 00:12:52.186 { 00:12:52.186 "trtype": "TCP" 00:12:52.186 } 00:12:52.186 ] 00:12:52.186 }, 00:12:52.186 { 00:12:52.186 "name": "nvmf_tgt_poll_group_001", 00:12:52.186 "admin_qpairs": 2, 00:12:52.186 "io_qpairs": 168, 00:12:52.186 "current_admin_qpairs": 0, 00:12:52.186 "current_io_qpairs": 0, 00:12:52.186 "pending_bdev_io": 0, 00:12:52.186 "completed_nvme_io": 268, 00:12:52.186 "transports": [ 00:12:52.186 { 00:12:52.186 "trtype": "TCP" 00:12:52.186 } 00:12:52.186 ] 00:12:52.186 }, 00:12:52.186 { 00:12:52.186 "name": "nvmf_tgt_poll_group_002", 00:12:52.186 "admin_qpairs": 1, 00:12:52.186 "io_qpairs": 168, 00:12:52.186 "current_admin_qpairs": 0, 00:12:52.186 "current_io_qpairs": 0, 00:12:52.186 "pending_bdev_io": 0, 00:12:52.186 "completed_nvme_io": 269, 00:12:52.186 "transports": [ 00:12:52.186 { 00:12:52.186 "trtype": "TCP" 00:12:52.186 } 00:12:52.186 ] 00:12:52.186 }, 00:12:52.186 { 00:12:52.186 "name": "nvmf_tgt_poll_group_003", 00:12:52.186 "admin_qpairs": 2, 00:12:52.186 "io_qpairs": 168, 00:12:52.186 "current_admin_qpairs": 0, 00:12:52.186 "current_io_qpairs": 0, 00:12:52.186 "pending_bdev_io": 0, 00:12:52.186 "completed_nvme_io": 267, 00:12:52.186 "transports": [ 00:12:52.186 { 00:12:52.186 "trtype": "TCP" 00:12:52.186 } 00:12:52.186 ] 00:12:52.186 } 00:12:52.186 ] 00:12:52.186 }' 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:52.186 rmmod nvme_tcp 00:12:52.186 rmmod nvme_fabrics 00:12:52.186 rmmod nvme_keyring 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2906416 ']' 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2906416 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2906416 ']' 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2906416 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:52.186 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2906416 00:12:52.446 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:52.446 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:52.446 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2906416' 00:12:52.446 killing process with pid 2906416 00:12:52.446 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2906416 00:12:52.446 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 2906416 00:12:52.446 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:52.446 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:52.446 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:52.446 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.446 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:52.446 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
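The jsum calls above collapse the nvmf_get_stats JSON into one total per field: jq emits one number per poll group, awk sums them, and the test only asserts that the totals are positive (7 admin qpairs and 672 io qpairs across the four poll groups in this run). A sketch of that helper as traced (target/rpc.sh lines 19 and 20), assuming $stats holds the JSON captured from rpc_cmd nvmf_get_stats:

    jsum() {
        local filter=$1
        # One value per poll group, summed into a single number on stdout.
        jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
    }

    stats=$(rpc_cmd nvmf_get_stats)
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))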
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.446 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.446 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.985 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:54.985 00:12:54.985 real 0m32.313s 00:12:54.985 user 1m39.727s 00:12:54.985 sys 0m5.572s 00:12:54.985 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:54.985 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.985 ************************************ 00:12:54.985 END TEST nvmf_rpc 00:12:54.985 ************************************ 00:12:54.985 13:54:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:54.985 13:54:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:54.985 13:54:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:54.985 13:54:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:54.985 ************************************ 00:12:54.985 START TEST nvmf_invalid 00:12:54.985 ************************************ 00:12:54.985 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:54.985 * Looking for test storage... 00:12:54.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.985 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.985 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:54.985 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:54.986 13:54:22 
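nvmftestfini, traced just above, is the mirror image of the setup: it unloads the host-side NVMe/TCP modules (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring), kills the nvmf_tgt it started, and removes the target network namespace. A condensed sketch of that teardown, with the interface and namespace names taken from this run; the real helper in nvmf/common.sh is more defensive about retries and also handles the iso and virtual-interface setups:

    nvmf_teardown() {
        sync
        modprobe -v -r nvme-tcp          # produces the rmmod output seen above
        modprobe -v -r nvme-fabrics
        if [[ -n $nvmfpid ]]; then
            kill "$nvmfpid" && wait "$nvmfpid"   # nvmf_tgt launched by nvmfappstart
        fi
        ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
        ip -4 addr flush cvl_0_1
    }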
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:54.986 13:54:22 
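invalid.sh, whose preamble is traced here, sets up a small vocabulary for the negative tests that follow: the path to scripts/rpc.py, an NQN prefix it appends $RANDOM-generated cnode numbers to, a deliberately nonexistent target name (foobar), and RANDOM seeded to 0 so the generated numbers repeat between runs. Each case then feeds a bad argument to rpc.py and pattern-matches the JSON-RPC error text, as in this sketch of the first case visible further down in the log (the real script's error plumbing may differ slightly):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode
    target=foobar
    RANDOM=0    # make the cnode numbers below reproducible

    # Creating a subsystem against an unknown target must fail with a clear error.
    out=$("$rpc" nvmf_create_subsystem -t "$target" "$nqn$RANDOM" 2>&1) || true
    [[ $out == *"Unable to find target"* ]]

A later case in the same script passes a serial number with a non-printable byte appended ($'SPDKISFASTANDAWESOME\037') and checks the resulting "invalid serial number" error the same way.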
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:54.986 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:00.306 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:00.306 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:00.306 Found net devices under 0000:86:00.0: cvl_0_0 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.306 13:54:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:00.306 Found net devices under 0000:86:00.1: cvl_0_1 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:00.306 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:00.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:00.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:13:00.307 00:13:00.307 --- 10.0.0.2 ping statistics --- 00:13:00.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.307 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:00.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:00.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.460 ms 00:13:00.307 00:13:00.307 --- 10.0.0.1 ping statistics --- 00:13:00.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.307 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2914531 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2914531 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2914531 ']' 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.307 13:54:27 
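The nvmftestinit trace above builds the two-port physical topology this job runs on: one E810 port (cvl_0_0) is moved into a dedicated network namespace and addressed as the target at 10.0.0.2, its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction proves connectivity before nvmf_tgt is started inside the namespace. Condensed from the traced commands (interface names and nvmf_tgt arguments are specific to this run):

    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator stays in the root ns
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP in
    ping -c 1 10.0.0.2                                              # root ns -> target ns
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1       # target ns -> root ns

    # nvmfappstart then launches the target inside the namespace:
    ip netns exec "$NVMF_TARGET_NAMESPACE" \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!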
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:00.307 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:00.307 [2024-07-26 13:54:27.591619] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:13:00.307 [2024-07-26 13:54:27.591666] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.307 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.307 [2024-07-26 13:54:27.649446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:00.307 [2024-07-26 13:54:27.730172] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.307 [2024-07-26 13:54:27.730206] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.307 [2024-07-26 13:54:27.730213] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.307 [2024-07-26 13:54:27.730220] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.307 [2024-07-26 13:54:27.730225] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
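The nvmf_tcp_init and nvmfappstart traces above reduce to a short sequence: move the target-side port (cvl_0_0) into a private network namespace, address both sides on 10.0.0.0/24, open TCP port 4420, then launch nvmf_tgt inside that namespace and wait for its RPC socket. A minimal sketch of that same sequence, using the interface names, addresses and flags from this run (paths shortened; the socket-polling loop is a simplified stand-in for waitforlisten, not the real helper):

# target side gets its own namespace; initiator side stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator interface
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target interface
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # open the NVMe/TCP port in the host firewall
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # sanity-check both directions, as above

# start the SPDK target inside the namespace and wait for the RPC socket to appear
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.5; done                 # simplified waitforlisten

Once the socket exists, every rpc.py call in the rest of the test is executed against that target running inside cvl_0_0_ns_spdk.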
00:13:00.307 [2024-07-26 13:54:27.730258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.307 [2024-07-26 13:54:27.730362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.307 [2024-07-26 13:54:27.730439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.307 [2024-07-26 13:54:27.730440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.246 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:01.246 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:01.246 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:01.246 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:01.246 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:01.246 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.246 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:01.246 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1133 00:13:01.246 [2024-07-26 13:54:28.594967] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:01.246 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:01.246 { 00:13:01.246 "nqn": "nqn.2016-06.io.spdk:cnode1133", 00:13:01.246 "tgt_name": "foobar", 00:13:01.246 "method": "nvmf_create_subsystem", 00:13:01.246 "req_id": 1 00:13:01.246 } 00:13:01.246 Got JSON-RPC error response 00:13:01.246 response: 00:13:01.246 { 00:13:01.246 "code": -32603, 00:13:01.246 "message": "Unable to find target foobar" 00:13:01.246 }' 00:13:01.246 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:01.246 { 00:13:01.246 "nqn": "nqn.2016-06.io.spdk:cnode1133", 00:13:01.246 "tgt_name": "foobar", 00:13:01.246 "method": "nvmf_create_subsystem", 00:13:01.246 "req_id": 1 00:13:01.246 } 00:13:01.246 Got JSON-RPC error response 00:13:01.246 response: 00:13:01.246 { 00:13:01.246 "code": -32603, 00:13:01.246 "message": "Unable to find target foobar" 00:13:01.246 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:01.246 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:01.246 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11212 00:13:01.506 [2024-07-26 13:54:28.783665] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11212: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:01.506 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:01.506 { 00:13:01.506 "nqn": "nqn.2016-06.io.spdk:cnode11212", 00:13:01.506 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:01.506 "method": "nvmf_create_subsystem", 00:13:01.506 "req_id": 1 00:13:01.506 } 00:13:01.506 Got JSON-RPC error 
response 00:13:01.506 response: 00:13:01.506 { 00:13:01.506 "code": -32602, 00:13:01.506 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:01.506 }' 00:13:01.506 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:01.506 { 00:13:01.506 "nqn": "nqn.2016-06.io.spdk:cnode11212", 00:13:01.506 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:01.506 "method": "nvmf_create_subsystem", 00:13:01.506 "req_id": 1 00:13:01.506 } 00:13:01.506 Got JSON-RPC error response 00:13:01.506 response: 00:13:01.506 { 00:13:01.506 "code": -32602, 00:13:01.506 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:01.506 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:01.506 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:01.506 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode19193 00:13:01.766 [2024-07-26 13:54:28.972282] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19193: invalid model number 'SPDK_Controller' 00:13:01.766 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:01.766 { 00:13:01.766 "nqn": "nqn.2016-06.io.spdk:cnode19193", 00:13:01.766 "model_number": "SPDK_Controller\u001f", 00:13:01.766 "method": "nvmf_create_subsystem", 00:13:01.766 "req_id": 1 00:13:01.766 } 00:13:01.766 Got JSON-RPC error response 00:13:01.766 response: 00:13:01.766 { 00:13:01.766 "code": -32602, 00:13:01.766 "message": "Invalid MN SPDK_Controller\u001f" 00:13:01.766 }' 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:01.766 { 00:13:01.766 "nqn": "nqn.2016-06.io.spdk:cnode19193", 00:13:01.766 "model_number": "SPDK_Controller\u001f", 00:13:01.766 "method": "nvmf_create_subsystem", 00:13:01.766 "req_id": 1 00:13:01.766 } 00:13:01.766 Got JSON-RPC error response 00:13:01.766 response: 00:13:01.766 { 00:13:01.766 "code": -32602, 00:13:01.766 "message": "Invalid MN SPDK_Controller\u001f" 00:13:01.766 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 93 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:01.766 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:01.767 13:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ] == \- ]] 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ']Dyb%2`;EO8U[Vxh|qENk' 00:13:01.767 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ']Dyb%2`;EO8U[Vxh|qENk' nqn.2016-06.io.spdk:cnode4021 00:13:02.027 [2024-07-26 13:54:29.301416] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4021: invalid serial number ']Dyb%2`;EO8U[Vxh|qENk' 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:02.027 { 00:13:02.027 "nqn": "nqn.2016-06.io.spdk:cnode4021", 00:13:02.027 "serial_number": "]Dyb%2`;EO8U[Vxh|qENk", 00:13:02.027 "method": "nvmf_create_subsystem", 00:13:02.027 "req_id": 1 00:13:02.027 } 00:13:02.027 Got JSON-RPC error response 00:13:02.027 response: 00:13:02.027 { 00:13:02.027 "code": -32602, 00:13:02.027 "message": "Invalid SN ]Dyb%2`;EO8U[Vxh|qENk" 00:13:02.027 }' 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:02.027 { 00:13:02.027 "nqn": "nqn.2016-06.io.spdk:cnode4021", 00:13:02.027 "serial_number": "]Dyb%2`;EO8U[Vxh|qENk", 00:13:02.027 "method": "nvmf_create_subsystem", 00:13:02.027 "req_id": 1 00:13:02.027 } 00:13:02.027 Got JSON-RPC error response 00:13:02.027 response: 00:13:02.027 { 00:13:02.027 "code": -32602, 00:13:02.027 "message": "Invalid SN ]Dyb%2`;EO8U[Vxh|qENk" 00:13:02.027 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:02.027 13:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.027 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:02.028 
13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:02.028 
13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.028 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.287 
13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:02.287 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.288 
13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 
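The long printf/echo walls above and below are gen_random_s at work: invalid.sh asks for a 21-character serial number and a 41-character model number, each one character longer than the 20-byte SN and 40-byte MN fields NVMe allows, so nvmf_create_subsystem has to reject them. Reconstructed from this trace, the generator is roughly the following (a sketch, not the exact invalid.sh source):

gen_random_s() {
    local length=$1 ll string=
    local chars=({32..127})            # decimal codes for the character range seen in the trace
    for ((ll = 0; ll < length; ll++)); do
        # pick a code at random, render it as hex, expand it to the character, append it
        string+=$(echo -e "\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")")
    done
    echo "$string"
}

The [[ l == \- ]] test right before the final echo checks that the string does not begin with a dash, presumably so rpc.py's option parser cannot mistake it for a flag; the result is then fed to nvmf_create_subsystem -s or -d and the surrounding trace expects "Invalid SN" or "Invalid MN" in the JSON-RPC error.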
00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ l == \- ]] 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 
'l/5)Z4y=UL[#Ct3oLBLSup$hM"{DE/G`+&EJ`g^]y' 00:13:02.288 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'l/5)Z4y=UL[#Ct3oLBLSup$hM"{DE/G`+&EJ`g^]y' nqn.2016-06.io.spdk:cnode8113 00:13:02.548 [2024-07-26 13:54:29.754960] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8113: invalid model number 'l/5)Z4y=UL[#Ct3oLBLSup$hM"{DE/G`+&EJ`g^]y' 00:13:02.548 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:02.548 { 00:13:02.548 "nqn": "nqn.2016-06.io.spdk:cnode8113", 00:13:02.548 "model_number": "l/5)Z4y=UL[#Ct3oLBLSup$hM\"{DE/G`+&EJ`g^]y", 00:13:02.548 "method": "nvmf_create_subsystem", 00:13:02.548 "req_id": 1 00:13:02.548 } 00:13:02.548 Got JSON-RPC error response 00:13:02.548 response: 00:13:02.548 { 00:13:02.548 "code": -32602, 00:13:02.548 "message": "Invalid MN l/5)Z4y=UL[#Ct3oLBLSup$hM\"{DE/G`+&EJ`g^]y" 00:13:02.548 }' 00:13:02.548 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:02.548 { 00:13:02.548 "nqn": "nqn.2016-06.io.spdk:cnode8113", 00:13:02.548 "model_number": "l/5)Z4y=UL[#Ct3oLBLSup$hM\"{DE/G`+&EJ`g^]y", 00:13:02.548 "method": "nvmf_create_subsystem", 00:13:02.548 "req_id": 1 00:13:02.548 } 00:13:02.548 Got JSON-RPC error response 00:13:02.548 response: 00:13:02.548 { 00:13:02.548 "code": -32602, 00:13:02.548 "message": "Invalid MN l/5)Z4y=UL[#Ct3oLBLSup$hM\"{DE/G`+&EJ`g^]y" 00:13:02.548 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:02.548 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:02.548 [2024-07-26 13:54:29.939651] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.548 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:02.808 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:02.808 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:02.808 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:02.808 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:02.808 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:03.067 [2024-07-26 13:54:30.320910] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:03.067 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:03.067 { 00:13:03.067 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:03.067 "listen_address": { 00:13:03.067 "trtype": "tcp", 00:13:03.067 "traddr": "", 00:13:03.067 "trsvcid": "4421" 00:13:03.067 }, 00:13:03.067 "method": "nvmf_subsystem_remove_listener", 00:13:03.067 "req_id": 1 00:13:03.067 } 00:13:03.067 Got JSON-RPC error response 00:13:03.067 response: 00:13:03.067 { 00:13:03.067 "code": -32602, 00:13:03.067 "message": "Invalid parameters" 00:13:03.067 }' 00:13:03.067 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 
request: 00:13:03.067 { 00:13:03.067 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:03.067 "listen_address": { 00:13:03.067 "trtype": "tcp", 00:13:03.067 "traddr": "", 00:13:03.067 "trsvcid": "4421" 00:13:03.067 }, 00:13:03.067 "method": "nvmf_subsystem_remove_listener", 00:13:03.067 "req_id": 1 00:13:03.067 } 00:13:03.067 Got JSON-RPC error response 00:13:03.067 response: 00:13:03.067 { 00:13:03.067 "code": -32602, 00:13:03.067 "message": "Invalid parameters" 00:13:03.067 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:03.067 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10307 -i 0 00:13:03.328 [2024-07-26 13:54:30.505509] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10307: invalid cntlid range [0-65519] 00:13:03.328 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:03.328 { 00:13:03.328 "nqn": "nqn.2016-06.io.spdk:cnode10307", 00:13:03.328 "min_cntlid": 0, 00:13:03.328 "method": "nvmf_create_subsystem", 00:13:03.328 "req_id": 1 00:13:03.328 } 00:13:03.328 Got JSON-RPC error response 00:13:03.328 response: 00:13:03.328 { 00:13:03.328 "code": -32602, 00:13:03.328 "message": "Invalid cntlid range [0-65519]" 00:13:03.328 }' 00:13:03.328 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:03.328 { 00:13:03.328 "nqn": "nqn.2016-06.io.spdk:cnode10307", 00:13:03.328 "min_cntlid": 0, 00:13:03.328 "method": "nvmf_create_subsystem", 00:13:03.328 "req_id": 1 00:13:03.328 } 00:13:03.328 Got JSON-RPC error response 00:13:03.328 response: 00:13:03.328 { 00:13:03.328 "code": -32602, 00:13:03.328 "message": "Invalid cntlid range [0-65519]" 00:13:03.328 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:03.328 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30583 -i 65520 00:13:03.328 [2024-07-26 13:54:30.698124] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30583: invalid cntlid range [65520-65519] 00:13:03.328 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:03.328 { 00:13:03.328 "nqn": "nqn.2016-06.io.spdk:cnode30583", 00:13:03.328 "min_cntlid": 65520, 00:13:03.328 "method": "nvmf_create_subsystem", 00:13:03.328 "req_id": 1 00:13:03.328 } 00:13:03.328 Got JSON-RPC error response 00:13:03.328 response: 00:13:03.328 { 00:13:03.328 "code": -32602, 00:13:03.328 "message": "Invalid cntlid range [65520-65519]" 00:13:03.328 }' 00:13:03.328 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:03.328 { 00:13:03.328 "nqn": "nqn.2016-06.io.spdk:cnode30583", 00:13:03.328 "min_cntlid": 65520, 00:13:03.328 "method": "nvmf_create_subsystem", 00:13:03.328 "req_id": 1 00:13:03.328 } 00:13:03.328 Got JSON-RPC error response 00:13:03.328 response: 00:13:03.328 { 00:13:03.328 "code": -32602, 00:13:03.328 "message": "Invalid cntlid range [65520-65519]" 00:13:03.328 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:03.328 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7979 -I 0 00:13:03.588 [2024-07-26 
13:54:30.894824] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7979: invalid cntlid range [1-0] 00:13:03.588 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:03.588 { 00:13:03.588 "nqn": "nqn.2016-06.io.spdk:cnode7979", 00:13:03.588 "max_cntlid": 0, 00:13:03.588 "method": "nvmf_create_subsystem", 00:13:03.588 "req_id": 1 00:13:03.588 } 00:13:03.588 Got JSON-RPC error response 00:13:03.588 response: 00:13:03.588 { 00:13:03.588 "code": -32602, 00:13:03.588 "message": "Invalid cntlid range [1-0]" 00:13:03.588 }' 00:13:03.588 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:03.588 { 00:13:03.588 "nqn": "nqn.2016-06.io.spdk:cnode7979", 00:13:03.588 "max_cntlid": 0, 00:13:03.588 "method": "nvmf_create_subsystem", 00:13:03.588 "req_id": 1 00:13:03.588 } 00:13:03.588 Got JSON-RPC error response 00:13:03.588 response: 00:13:03.588 { 00:13:03.588 "code": -32602, 00:13:03.588 "message": "Invalid cntlid range [1-0]" 00:13:03.588 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:03.588 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16531 -I 65520 00:13:03.849 [2024-07-26 13:54:31.087438] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16531: invalid cntlid range [1-65520] 00:13:03.849 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:03.849 { 00:13:03.849 "nqn": "nqn.2016-06.io.spdk:cnode16531", 00:13:03.849 "max_cntlid": 65520, 00:13:03.849 "method": "nvmf_create_subsystem", 00:13:03.849 "req_id": 1 00:13:03.849 } 00:13:03.849 Got JSON-RPC error response 00:13:03.849 response: 00:13:03.849 { 00:13:03.849 "code": -32602, 00:13:03.849 "message": "Invalid cntlid range [1-65520]" 00:13:03.849 }' 00:13:03.849 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:03.849 { 00:13:03.849 "nqn": "nqn.2016-06.io.spdk:cnode16531", 00:13:03.849 "max_cntlid": 65520, 00:13:03.849 "method": "nvmf_create_subsystem", 00:13:03.849 "req_id": 1 00:13:03.849 } 00:13:03.849 Got JSON-RPC error response 00:13:03.849 response: 00:13:03.849 { 00:13:03.849 "code": -32602, 00:13:03.849 "message": "Invalid cntlid range [1-65520]" 00:13:03.849 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:03.849 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23457 -i 6 -I 5 00:13:03.849 [2024-07-26 13:54:31.276082] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23457: invalid cntlid range [6-5] 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:04.110 { 00:13:04.110 "nqn": "nqn.2016-06.io.spdk:cnode23457", 00:13:04.110 "min_cntlid": 6, 00:13:04.110 "max_cntlid": 5, 00:13:04.110 "method": "nvmf_create_subsystem", 00:13:04.110 "req_id": 1 00:13:04.110 } 00:13:04.110 Got JSON-RPC error response 00:13:04.110 response: 00:13:04.110 { 00:13:04.110 "code": -32602, 00:13:04.110 "message": "Invalid cntlid range [6-5]" 00:13:04.110 }' 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:04.110 { 00:13:04.110 "nqn": 
"nqn.2016-06.io.spdk:cnode23457", 00:13:04.110 "min_cntlid": 6, 00:13:04.110 "max_cntlid": 5, 00:13:04.110 "method": "nvmf_create_subsystem", 00:13:04.110 "req_id": 1 00:13:04.110 } 00:13:04.110 Got JSON-RPC error response 00:13:04.110 response: 00:13:04.110 { 00:13:04.110 "code": -32602, 00:13:04.110 "message": "Invalid cntlid range [6-5]" 00:13:04.110 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:04.110 { 00:13:04.110 "name": "foobar", 00:13:04.110 "method": "nvmf_delete_target", 00:13:04.110 "req_id": 1 00:13:04.110 } 00:13:04.110 Got JSON-RPC error response 00:13:04.110 response: 00:13:04.110 { 00:13:04.110 "code": -32602, 00:13:04.110 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:04.110 }' 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:04.110 { 00:13:04.110 "name": "foobar", 00:13:04.110 "method": "nvmf_delete_target", 00:13:04.110 "req_id": 1 00:13:04.110 } 00:13:04.110 Got JSON-RPC error response 00:13:04.110 response: 00:13:04.110 { 00:13:04.110 "code": -32602, 00:13:04.110 "message": "The specified target doesn't exist, cannot delete it." 00:13:04.110 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:04.110 rmmod nvme_tcp 00:13:04.110 rmmod nvme_fabrics 00:13:04.110 rmmod nvme_keyring 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2914531 ']' 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2914531 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 2914531 ']' 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 2914531 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:04.110 
13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2914531 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2914531' 00:13:04.110 killing process with pid 2914531 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 2914531 00:13:04.110 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 2914531 00:13:04.370 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:04.370 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:04.370 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:04.370 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:04.370 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:04.370 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.370 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.370 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:06.915 00:13:06.915 real 0m11.764s 00:13:06.915 user 0m19.589s 00:13:06.915 sys 0m4.991s 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:06.915 ************************************ 00:13:06.915 END TEST nvmf_invalid 00:13:06.915 ************************************ 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:06.915 ************************************ 00:13:06.915 START TEST nvmf_connect_stress 00:13:06.915 ************************************ 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:06.915 * Looking for test storage... 
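The nvmf_invalid phase that ends above drives nvmf_create_subsystem through rpc.py with out-of-range cntlid values ([1-0], [1-65520], [6-5]) and expects a JSON-RPC -32602 "Invalid cntlid range" error each time, plus a matching error for nvmf_delete_target on a nonexistent target. A minimal sketch of that negative check, assuming the same rpc.py path and NQNs as the trace; the check_rejected helper is illustrative and not part of target/invalid.sh:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  check_rejected() {
      # Arguments are nvmf_create_subsystem parameters that the target must refuse.
      local out
      if out=$("$rpc" nvmf_create_subsystem "$@" 2>&1); then
          echo "unexpected success: $*" >&2; return 1
      fi
      [[ $out == *"Invalid cntlid range"* ]]      # error text must match the RPC response
  }

  check_rejected nqn.2016-06.io.spdk:cnode7979  -I 0        # max_cntlid 0 -> range [1-0]
  check_rejected nqn.2016-06.io.spdk:cnode16531 -I 65520    # max_cntlid too large -> [1-65520]
  check_rejected nqn.2016-06.io.spdk:cnode23457 -i 6 -I 5   # min above max -> [6-5]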
00:13:06.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:06.915 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.116 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:11.117 13:54:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:11.117 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:11.117 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
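The gather_supported_nvmf_pci_devs trace around this point builds lists of supported Intel (e810/x722) and Mellanox device IDs and then resolves each matching PCI address to its kernel net device through sysfs, which is where the "Found net devices under 0000:86:00.0: cvl_0_0" lines come from. A rough standalone sketch of that lookup, assuming an lspci-based scan for the 8086:159b ID reported here; the real helper in test/nvmf/common.sh walks a prebuilt pci_bus_cache instead:

  # Map E810 NICs (vendor 0x8086, device 0x159b, as reported above) to net devices.
  for pci in $(lspci -Dnmm | awk '$3 ~ /8086/ && $4 ~ /159b/ {print $1}'); do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $netdir ]] || continue                    # skip NICs with no bound net driver
          echo "Found net devices under $pci: ${netdir##*/}"
      done
  done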
00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:11.117 Found net devices under 0000:86:00.0: cvl_0_0 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:11.117 Found net devices under 0000:86:00.1: cvl_0_1 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:11.117 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:11.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:11.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:13:11.378 00:13:11.378 --- 10.0.0.2 ping statistics --- 00:13:11.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.378 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:11.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:11.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:13:11.378 00:13:11.378 --- 10.0.0.1 ping statistics --- 00:13:11.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.378 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2918687 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2918687 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2918687 ']' 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.378 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:11.378 [2024-07-26 13:54:38.784170] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
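nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0xE and then blocks in waitforlisten until the RPC socket answers. A condensed sketch of that start-and-wait pattern, assuming the default /var/tmp/spdk.sock socket as in the trace; the polling loop stands in for the common.sh waitforlisten helper:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Poll until the target answers JSON-RPC on the UNIX domain socket.
  until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.5
  done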
00:13:11.378 [2024-07-26 13:54:38.784216] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.378 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.638 [2024-07-26 13:54:38.840803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:11.638 [2024-07-26 13:54:38.921127] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.638 [2024-07-26 13:54:38.921162] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.638 [2024-07-26 13:54:38.921169] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.638 [2024-07-26 13:54:38.921175] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.638 [2024-07-26 13:54:38.921181] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:11.638 [2024-07-26 13:54:38.921224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.638 [2024-07-26 13:54:38.921310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.638 [2024-07-26 13:54:38.921312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.207 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:12.207 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:12.207 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:12.207 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:12.207 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.207 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.207 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:12.207 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.207 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.207 [2024-07-26 13:54:39.636569] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:12.468 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.468 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:12.468 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.468 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.468 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.468 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:13:12.468 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.468 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.468 [2024-07-26 13:54:39.666976] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.468 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.468 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:12.468 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.468 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.468 NULL1 00:13:12.468 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.468 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2918933 00:13:12.468 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:12.468 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.469 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.469 13:54:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.469 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.469 13:54:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.729 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.729 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:12.729 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.729 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.729 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.989 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.989 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:12.989 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.989 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.989 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.560 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.560 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:13.560 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.560 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.560 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.820 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.820 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:13.820 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.820 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.820 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.080 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.080 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:14.080 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.080 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.080 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.340 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.341 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:14.341 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.341 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.341 13:54:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.911 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.911 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:14.911 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.911 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.911 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.209 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.209 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:15.209 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.209 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.209 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.556 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.556 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:15.556 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.556 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.556 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.816 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.816 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:15.816 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.816 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.816 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.076 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.076 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:16.076 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.076 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.076 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.336 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.336 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:16.336 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.336 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.336 13:54:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.595 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.595 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:16.595 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.595 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.595 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.165 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.165 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:17.165 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.165 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.165 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.425 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.425 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:17.425 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.425 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.425 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.685 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.685 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:17.685 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.685 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.685 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.945 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.945 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:17.945 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.945 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.945 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.205 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.205 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:18.205 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.205 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.205 13:54:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.774 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.774 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:18.774 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.774 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.774 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.034 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.034 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:19.034 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.034 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.034 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.294 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.294 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:19.294 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.294 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.294 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.554 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.554 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:19.554 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.554 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.554 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.124 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.124 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:20.124 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.124 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.124 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.384 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.384 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:20.384 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.384 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.384 13:54:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.644 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.644 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:20.644 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.644 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.644 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.904 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.904 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:20.904 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.904 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.904 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.164 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.164 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:21.164 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.164 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.164 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.733 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.733 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:21.733 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.733 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.733 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.993 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.993 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:21.993 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.993 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.993 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.253 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.253 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:22.253 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.253 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.253 13:54:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.513 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2918933 00:13:22.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2918933) - No such process 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2918933 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:22.513 rmmod nvme_tcp 00:13:22.513 rmmod nvme_fabrics 00:13:22.513 rmmod nvme_keyring 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2918687 ']' 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2918687 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2918687 ']' 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2918687 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:22.513 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2918687 00:13:22.774 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:22.774 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:22.774 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2918687' 00:13:22.774 killing process with pid 2918687 00:13:22.774 13:54:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2918687 00:13:22.774 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2918687 00:13:22.774 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:22.774 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:22.774 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:22.774 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:22.774 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:22.774 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.774 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.774 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:25.312 00:13:25.312 real 0m18.437s 00:13:25.312 user 0m40.763s 00:13:25.312 sys 0m7.653s 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.312 ************************************ 00:13:25.312 END TEST nvmf_connect_stress 00:13:25.312 ************************************ 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:25.312 ************************************ 00:13:25.312 START TEST nvmf_fused_ordering 00:13:25.312 ************************************ 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:25.312 * Looking for test storage... 
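(For reference, before the fused_ordering output continues: the connect_stress teardown that finished just above boils down to a wait-for-exit loop. A minimal sketch, not the literal script -- variable names such as $stress_pid and $rpc_txt are illustrative; the real code is test/nvmf/target/connect_stress.sh -- inferred from the repeated "kill -0 <pid>" / "rpc_cmd" pairs in the log:)

    # keep the target busy with admin RPCs while the stress tool is alive
    while kill -0 "$stress_pid" 2>/dev/null; do
        rpc_cmd
    done
    wait "$stress_pid"            # reap it once kill -0 reports "No such process"
    rm -f "$rpc_txt"              # scratch rpc.txt used to drive rpc_cmd
    trap - SIGINT SIGTERM EXIT
    nvmftestfini                  # rmmod nvme-tcp/nvme-fabrics and stop the target app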
00:13:25.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.312 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:25.313 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:13:30.592 13:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:30.592 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:30.592 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
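(The PCI scan above comes from nvmf/common.sh: supported Intel E810/x722 and Mellanox device IDs are collected into pci_devs, and each function is then mapped to its kernel interface through sysfs. A condensed sketch, using the array names shown in the log, of the step that produces the "Found net devices under 0000:86:00.x: cvl_0_x" lines that follow:)

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:86:00.0/net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done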
00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:30.592 Found net devices under 0000:86:00.0: cvl_0_0 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:30.592 Found net devices under 0000:86:00.1: cvl_0_1 00:13:30.592 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:30.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:30.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:13:30.593 00:13:30.593 --- 10.0.0.2 ping statistics --- 00:13:30.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.593 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:30.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:13:30.593 00:13:30.593 --- 10.0.0.1 ping statistics --- 00:13:30.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.593 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2924075 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2924075 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2924075 ']' 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:30.593 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:30.593 [2024-07-26 13:54:57.980409] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
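(The nvmf_tcp_init steps above turn the two E810 ports into a back-to-back target/initiator pair: cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace for the target, cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator, and the two pings confirm connectivity. Condensed from the commands in the log:)

    ip netns add cvl_0_0_ns_spdk                                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP through
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator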
00:13:30.593 [2024-07-26 13:54:57.980460] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.593 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.853 [2024-07-26 13:54:58.038703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.853 [2024-07-26 13:54:58.114433] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.853 [2024-07-26 13:54:58.114467] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.853 [2024-07-26 13:54:58.114474] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.853 [2024-07-26 13:54:58.114480] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.853 [2024-07-26 13:54:58.114485] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.853 [2024-07-26 13:54:58.114503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.424 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:31.424 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:13:31.424 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:31.424 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:31.424 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:31.424 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.424 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:31.424 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.424 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:31.424 [2024-07-26 13:54:58.838537] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:31.424 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.424 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:31.424 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.424 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:31.424 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.424 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.424 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.424 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:13:31.424 [2024-07-26 13:54:58.854701] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.424 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.424 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:31.424 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.424 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:31.684 NULL1 00:13:31.684 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.684 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:31.684 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.684 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:31.684 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.684 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:31.684 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.684 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:31.684 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.684 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:31.685 [2024-07-26 13:54:58.906443] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
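(Once nvmf_tgt is running inside the namespace -- the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2" line above -- fused_ordering.sh reduces to a short RPC sequence followed by the initiator tool. Condensed from the rpc_cmd calls in the log; rpc_cmd is the test helper that forwards these calls to the target's RPC socket, and the tool path is shortened here relative to the full workspace path:)

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512    # 1000 MiB null bdev, 512 B blocks -> the "1GB" namespace reported below
    rpc_cmd bdev_wait_for_examine
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    ./test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'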
00:13:31.685 [2024-07-26 13:54:58.906475] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2924318 ] 00:13:31.685 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.623 Attached to nqn.2016-06.io.spdk:cnode1 00:13:32.623 Namespace ID: 1 size: 1GB 00:13:32.623 fused_ordering(0) 00:13:32.623 fused_ordering(1) 00:13:32.623 fused_ordering(2) 00:13:32.623 fused_ordering(3) 00:13:32.623 fused_ordering(4) 00:13:32.623 fused_ordering(5) 00:13:32.623 fused_ordering(6) 00:13:32.623 fused_ordering(7) 00:13:32.623 fused_ordering(8) 00:13:32.623 fused_ordering(9) 00:13:32.623 fused_ordering(10) 00:13:32.623 fused_ordering(11) 00:13:32.623 fused_ordering(12) 00:13:32.623 fused_ordering(13) 00:13:32.623 fused_ordering(14) 00:13:32.623 fused_ordering(15) 00:13:32.623 fused_ordering(16) 00:13:32.623 fused_ordering(17) 00:13:32.623 fused_ordering(18) 00:13:32.623 fused_ordering(19) 00:13:32.623 fused_ordering(20) 00:13:32.623 fused_ordering(21) 00:13:32.623 fused_ordering(22) 00:13:32.623 fused_ordering(23) 00:13:32.623 fused_ordering(24) 00:13:32.623 fused_ordering(25) 00:13:32.623 fused_ordering(26) 00:13:32.623 fused_ordering(27) 00:13:32.623 fused_ordering(28) 00:13:32.623 fused_ordering(29) 00:13:32.623 fused_ordering(30) 00:13:32.623 fused_ordering(31) 00:13:32.623 fused_ordering(32) 00:13:32.623 fused_ordering(33) 00:13:32.623 fused_ordering(34) 00:13:32.623 fused_ordering(35) 00:13:32.623 fused_ordering(36) 00:13:32.623 fused_ordering(37) 00:13:32.623 fused_ordering(38) 00:13:32.623 fused_ordering(39) 00:13:32.623 fused_ordering(40) 00:13:32.623 fused_ordering(41) 00:13:32.623 fused_ordering(42) 00:13:32.623 fused_ordering(43) 00:13:32.623 fused_ordering(44) 00:13:32.623 fused_ordering(45) 00:13:32.623 fused_ordering(46) 00:13:32.623 fused_ordering(47) 00:13:32.623 fused_ordering(48) 00:13:32.623 fused_ordering(49) 00:13:32.623 fused_ordering(50) 00:13:32.623 fused_ordering(51) 00:13:32.623 fused_ordering(52) 00:13:32.623 fused_ordering(53) 00:13:32.623 fused_ordering(54) 00:13:32.623 fused_ordering(55) 00:13:32.623 fused_ordering(56) 00:13:32.623 fused_ordering(57) 00:13:32.623 fused_ordering(58) 00:13:32.623 fused_ordering(59) 00:13:32.623 fused_ordering(60) 00:13:32.623 fused_ordering(61) 00:13:32.623 fused_ordering(62) 00:13:32.623 fused_ordering(63) 00:13:32.623 fused_ordering(64) 00:13:32.623 fused_ordering(65) 00:13:32.623 fused_ordering(66) 00:13:32.623 fused_ordering(67) 00:13:32.623 fused_ordering(68) 00:13:32.623 fused_ordering(69) 00:13:32.623 fused_ordering(70) 00:13:32.623 fused_ordering(71) 00:13:32.623 fused_ordering(72) 00:13:32.623 fused_ordering(73) 00:13:32.623 fused_ordering(74) 00:13:32.623 fused_ordering(75) 00:13:32.623 fused_ordering(76) 00:13:32.623 fused_ordering(77) 00:13:32.623 fused_ordering(78) 00:13:32.623 fused_ordering(79) 00:13:32.623 fused_ordering(80) 00:13:32.623 fused_ordering(81) 00:13:32.623 fused_ordering(82) 00:13:32.623 fused_ordering(83) 00:13:32.623 fused_ordering(84) 00:13:32.623 fused_ordering(85) 00:13:32.623 fused_ordering(86) 00:13:32.623 fused_ordering(87) 00:13:32.623 fused_ordering(88) 00:13:32.623 fused_ordering(89) 00:13:32.623 fused_ordering(90) 00:13:32.623 fused_ordering(91) 00:13:32.623 fused_ordering(92) 00:13:32.623 fused_ordering(93) 00:13:32.623 fused_ordering(94) 00:13:32.623 fused_ordering(95) 00:13:32.623 fused_ordering(96) 
00:13:32.623 fused_ordering(97) 00:13:32.623 fused_ordering(98) 00:13:32.623 fused_ordering(99) 00:13:32.623 fused_ordering(100) 00:13:32.623 fused_ordering(101) 00:13:32.623 fused_ordering(102) 00:13:32.623 fused_ordering(103) 00:13:32.623 fused_ordering(104) 00:13:32.623 fused_ordering(105) 00:13:32.624 fused_ordering(106) 00:13:32.624 fused_ordering(107) 00:13:32.624 fused_ordering(108) 00:13:32.624 fused_ordering(109) 00:13:32.624 fused_ordering(110) 00:13:32.624 fused_ordering(111) 00:13:32.624 fused_ordering(112) 00:13:32.624 fused_ordering(113) 00:13:32.624 fused_ordering(114) 00:13:32.624 fused_ordering(115) 00:13:32.624 fused_ordering(116) 00:13:32.624 fused_ordering(117) 00:13:32.624 fused_ordering(118) 00:13:32.624 fused_ordering(119) 00:13:32.624 fused_ordering(120) 00:13:32.624 fused_ordering(121) 00:13:32.624 fused_ordering(122) 00:13:32.624 fused_ordering(123) 00:13:32.624 fused_ordering(124) 00:13:32.624 fused_ordering(125) 00:13:32.624 fused_ordering(126) 00:13:32.624 fused_ordering(127) 00:13:32.624 fused_ordering(128) 00:13:32.624 fused_ordering(129) 00:13:32.624 fused_ordering(130) 00:13:32.624 fused_ordering(131) 00:13:32.624 fused_ordering(132) 00:13:32.624 fused_ordering(133) 00:13:32.624 fused_ordering(134) 00:13:32.624 fused_ordering(135) 00:13:32.624 fused_ordering(136) 00:13:32.624 fused_ordering(137) 00:13:32.624 fused_ordering(138) 00:13:32.624 fused_ordering(139) 00:13:32.624 fused_ordering(140) 00:13:32.624 fused_ordering(141) 00:13:32.624 fused_ordering(142) 00:13:32.624 fused_ordering(143) 00:13:32.624 fused_ordering(144) 00:13:32.624 fused_ordering(145) 00:13:32.624 fused_ordering(146) 00:13:32.624 fused_ordering(147) 00:13:32.624 fused_ordering(148) 00:13:32.624 fused_ordering(149) 00:13:32.624 fused_ordering(150) 00:13:32.624 fused_ordering(151) 00:13:32.624 fused_ordering(152) 00:13:32.624 fused_ordering(153) 00:13:32.624 fused_ordering(154) 00:13:32.624 fused_ordering(155) 00:13:32.624 fused_ordering(156) 00:13:32.624 fused_ordering(157) 00:13:32.624 fused_ordering(158) 00:13:32.624 fused_ordering(159) 00:13:32.624 fused_ordering(160) 00:13:32.624 fused_ordering(161) 00:13:32.624 fused_ordering(162) 00:13:32.624 fused_ordering(163) 00:13:32.624 fused_ordering(164) 00:13:32.624 fused_ordering(165) 00:13:32.624 fused_ordering(166) 00:13:32.624 fused_ordering(167) 00:13:32.624 fused_ordering(168) 00:13:32.624 fused_ordering(169) 00:13:32.624 fused_ordering(170) 00:13:32.624 fused_ordering(171) 00:13:32.624 fused_ordering(172) 00:13:32.624 fused_ordering(173) 00:13:32.624 fused_ordering(174) 00:13:32.624 fused_ordering(175) 00:13:32.624 fused_ordering(176) 00:13:32.624 fused_ordering(177) 00:13:32.624 fused_ordering(178) 00:13:32.624 fused_ordering(179) 00:13:32.624 fused_ordering(180) 00:13:32.624 fused_ordering(181) 00:13:32.624 fused_ordering(182) 00:13:32.624 fused_ordering(183) 00:13:32.624 fused_ordering(184) 00:13:32.624 fused_ordering(185) 00:13:32.624 fused_ordering(186) 00:13:32.624 fused_ordering(187) 00:13:32.624 fused_ordering(188) 00:13:32.624 fused_ordering(189) 00:13:32.624 fused_ordering(190) 00:13:32.624 fused_ordering(191) 00:13:32.624 fused_ordering(192) 00:13:32.624 fused_ordering(193) 00:13:32.624 fused_ordering(194) 00:13:32.624 fused_ordering(195) 00:13:32.624 fused_ordering(196) 00:13:32.624 fused_ordering(197) 00:13:32.624 fused_ordering(198) 00:13:32.624 fused_ordering(199) 00:13:32.624 fused_ordering(200) 00:13:32.624 fused_ordering(201) 00:13:32.624 fused_ordering(202) 00:13:32.624 fused_ordering(203) 00:13:32.624 
fused_ordering(204) 00:13:32.624 fused_ordering(205) 00:13:33.564 fused_ordering(206) 00:13:33.564 fused_ordering(207) 00:13:33.564 fused_ordering(208) 00:13:33.564 fused_ordering(209) 00:13:33.564 fused_ordering(210) 00:13:33.564 fused_ordering(211) 00:13:33.564 fused_ordering(212) 00:13:33.564 fused_ordering(213) 00:13:33.564 fused_ordering(214) 00:13:33.564 fused_ordering(215) 00:13:33.564 fused_ordering(216) 00:13:33.564 fused_ordering(217) 00:13:33.564 fused_ordering(218) 00:13:33.564 fused_ordering(219) 00:13:33.564 fused_ordering(220) 00:13:33.564 fused_ordering(221) 00:13:33.564 fused_ordering(222) 00:13:33.564 fused_ordering(223) 00:13:33.564 fused_ordering(224) 00:13:33.564 fused_ordering(225) 00:13:33.564 fused_ordering(226) 00:13:33.564 fused_ordering(227) 00:13:33.564 fused_ordering(228) 00:13:33.564 fused_ordering(229) 00:13:33.564 fused_ordering(230) 00:13:33.564 fused_ordering(231) 00:13:33.564 fused_ordering(232) 00:13:33.564 fused_ordering(233) 00:13:33.564 fused_ordering(234) 00:13:33.564 fused_ordering(235) 00:13:33.564 fused_ordering(236) 00:13:33.564 fused_ordering(237) 00:13:33.564 fused_ordering(238) 00:13:33.564 fused_ordering(239) 00:13:33.564 fused_ordering(240) 00:13:33.564 fused_ordering(241) 00:13:33.564 fused_ordering(242) 00:13:33.564 fused_ordering(243) 00:13:33.564 fused_ordering(244) 00:13:33.564 fused_ordering(245) 00:13:33.564 fused_ordering(246) 00:13:33.564 fused_ordering(247) 00:13:33.564 fused_ordering(248) 00:13:33.564 fused_ordering(249) 00:13:33.564 fused_ordering(250) 00:13:33.564 fused_ordering(251) 00:13:33.564 fused_ordering(252) 00:13:33.564 fused_ordering(253) 00:13:33.564 fused_ordering(254) 00:13:33.564 fused_ordering(255) 00:13:33.564 fused_ordering(256) 00:13:33.564 fused_ordering(257) 00:13:33.564 fused_ordering(258) 00:13:33.564 fused_ordering(259) 00:13:33.564 fused_ordering(260) 00:13:33.564 fused_ordering(261) 00:13:33.564 fused_ordering(262) 00:13:33.564 fused_ordering(263) 00:13:33.564 fused_ordering(264) 00:13:33.564 fused_ordering(265) 00:13:33.564 fused_ordering(266) 00:13:33.564 fused_ordering(267) 00:13:33.564 fused_ordering(268) 00:13:33.564 fused_ordering(269) 00:13:33.564 fused_ordering(270) 00:13:33.564 fused_ordering(271) 00:13:33.564 fused_ordering(272) 00:13:33.564 fused_ordering(273) 00:13:33.564 fused_ordering(274) 00:13:33.564 fused_ordering(275) 00:13:33.564 fused_ordering(276) 00:13:33.564 fused_ordering(277) 00:13:33.564 fused_ordering(278) 00:13:33.564 fused_ordering(279) 00:13:33.564 fused_ordering(280) 00:13:33.564 fused_ordering(281) 00:13:33.564 fused_ordering(282) 00:13:33.564 fused_ordering(283) 00:13:33.564 fused_ordering(284) 00:13:33.564 fused_ordering(285) 00:13:33.564 fused_ordering(286) 00:13:33.564 fused_ordering(287) 00:13:33.564 fused_ordering(288) 00:13:33.564 fused_ordering(289) 00:13:33.564 fused_ordering(290) 00:13:33.564 fused_ordering(291) 00:13:33.564 fused_ordering(292) 00:13:33.564 fused_ordering(293) 00:13:33.564 fused_ordering(294) 00:13:33.564 fused_ordering(295) 00:13:33.564 fused_ordering(296) 00:13:33.564 fused_ordering(297) 00:13:33.564 fused_ordering(298) 00:13:33.564 fused_ordering(299) 00:13:33.564 fused_ordering(300) 00:13:33.564 fused_ordering(301) 00:13:33.564 fused_ordering(302) 00:13:33.564 fused_ordering(303) 00:13:33.564 fused_ordering(304) 00:13:33.564 fused_ordering(305) 00:13:33.564 fused_ordering(306) 00:13:33.564 fused_ordering(307) 00:13:33.564 fused_ordering(308) 00:13:33.564 fused_ordering(309) 00:13:33.564 fused_ordering(310) 00:13:33.564 fused_ordering(311) 
00:13:33.564 fused_ordering(312) 00:13:33.564 fused_ordering(313) 00:13:33.564 fused_ordering(314) 00:13:33.564 fused_ordering(315) 00:13:33.564 fused_ordering(316) 00:13:33.564 fused_ordering(317) 00:13:33.564 fused_ordering(318) 00:13:33.564 fused_ordering(319) 00:13:33.564 fused_ordering(320) 00:13:33.564 fused_ordering(321) 00:13:33.564 fused_ordering(322) 00:13:33.564 fused_ordering(323) 00:13:33.564 fused_ordering(324) 00:13:33.564 fused_ordering(325) 00:13:33.564 fused_ordering(326) 00:13:33.564 fused_ordering(327) 00:13:33.564 fused_ordering(328) 00:13:33.564 fused_ordering(329) 00:13:33.564 fused_ordering(330) 00:13:33.564 fused_ordering(331) 00:13:33.564 fused_ordering(332) 00:13:33.564 fused_ordering(333) 00:13:33.564 fused_ordering(334) 00:13:33.564 fused_ordering(335) 00:13:33.564 fused_ordering(336) 00:13:33.564 fused_ordering(337) 00:13:33.564 fused_ordering(338) 00:13:33.564 fused_ordering(339) 00:13:33.564 fused_ordering(340) 00:13:33.564 fused_ordering(341) 00:13:33.564 fused_ordering(342) 00:13:33.564 fused_ordering(343) 00:13:33.564 fused_ordering(344) 00:13:33.564 fused_ordering(345) 00:13:33.564 fused_ordering(346) 00:13:33.564 fused_ordering(347) 00:13:33.564 fused_ordering(348) 00:13:33.564 fused_ordering(349) 00:13:33.564 fused_ordering(350) 00:13:33.564 fused_ordering(351) 00:13:33.564 fused_ordering(352) 00:13:33.564 fused_ordering(353) 00:13:33.564 fused_ordering(354) 00:13:33.564 fused_ordering(355) 00:13:33.564 fused_ordering(356) 00:13:33.564 fused_ordering(357) 00:13:33.564 fused_ordering(358) 00:13:33.564 fused_ordering(359) 00:13:33.564 fused_ordering(360) 00:13:33.564 fused_ordering(361) 00:13:33.564 fused_ordering(362) 00:13:33.564 fused_ordering(363) 00:13:33.564 fused_ordering(364) 00:13:33.564 fused_ordering(365) 00:13:33.564 fused_ordering(366) 00:13:33.564 fused_ordering(367) 00:13:33.564 fused_ordering(368) 00:13:33.564 fused_ordering(369) 00:13:33.564 fused_ordering(370) 00:13:33.564 fused_ordering(371) 00:13:33.564 fused_ordering(372) 00:13:33.564 fused_ordering(373) 00:13:33.564 fused_ordering(374) 00:13:33.564 fused_ordering(375) 00:13:33.564 fused_ordering(376) 00:13:33.564 fused_ordering(377) 00:13:33.564 fused_ordering(378) 00:13:33.564 fused_ordering(379) 00:13:33.564 fused_ordering(380) 00:13:33.564 fused_ordering(381) 00:13:33.564 fused_ordering(382) 00:13:33.564 fused_ordering(383) 00:13:33.564 fused_ordering(384) 00:13:33.564 fused_ordering(385) 00:13:33.564 fused_ordering(386) 00:13:33.564 fused_ordering(387) 00:13:33.564 fused_ordering(388) 00:13:33.564 fused_ordering(389) 00:13:33.564 fused_ordering(390) 00:13:33.564 fused_ordering(391) 00:13:33.564 fused_ordering(392) 00:13:33.564 fused_ordering(393) 00:13:33.564 fused_ordering(394) 00:13:33.564 fused_ordering(395) 00:13:33.565 fused_ordering(396) 00:13:33.565 fused_ordering(397) 00:13:33.565 fused_ordering(398) 00:13:33.565 fused_ordering(399) 00:13:33.565 fused_ordering(400) 00:13:33.565 fused_ordering(401) 00:13:33.565 fused_ordering(402) 00:13:33.565 fused_ordering(403) 00:13:33.565 fused_ordering(404) 00:13:33.565 fused_ordering(405) 00:13:33.565 fused_ordering(406) 00:13:33.565 fused_ordering(407) 00:13:33.565 fused_ordering(408) 00:13:33.565 fused_ordering(409) 00:13:33.565 fused_ordering(410) 00:13:35.004 fused_ordering(411) 00:13:35.004 fused_ordering(412) 00:13:35.004 fused_ordering(413) 00:13:35.004 fused_ordering(414) 00:13:35.004 fused_ordering(415) 00:13:35.004 fused_ordering(416) 00:13:35.004 fused_ordering(417) 00:13:35.004 fused_ordering(418) 00:13:35.004 
fused_ordering(419) 00:13:35.004 fused_ordering(420) 00:13:35.004 fused_ordering(421) 00:13:35.004 fused_ordering(422) 00:13:35.004 fused_ordering(423) 00:13:35.004 fused_ordering(424) 00:13:35.004 fused_ordering(425) 00:13:35.004 fused_ordering(426) 00:13:35.004 fused_ordering(427) 00:13:35.004 fused_ordering(428) 00:13:35.004 fused_ordering(429) 00:13:35.004 fused_ordering(430) 00:13:35.004 fused_ordering(431) 00:13:35.004 fused_ordering(432) 00:13:35.004 fused_ordering(433) 00:13:35.004 fused_ordering(434) 00:13:35.004 fused_ordering(435) 00:13:35.004 fused_ordering(436) 00:13:35.004 fused_ordering(437) 00:13:35.004 fused_ordering(438) 00:13:35.004 fused_ordering(439) 00:13:35.004 fused_ordering(440) 00:13:35.004 fused_ordering(441) 00:13:35.004 fused_ordering(442) 00:13:35.004 fused_ordering(443) 00:13:35.004 fused_ordering(444) 00:13:35.004 fused_ordering(445) 00:13:35.004 fused_ordering(446) 00:13:35.004 fused_ordering(447) 00:13:35.004 fused_ordering(448) 00:13:35.004 fused_ordering(449) 00:13:35.004 fused_ordering(450) 00:13:35.004 fused_ordering(451) 00:13:35.004 fused_ordering(452) 00:13:35.004 fused_ordering(453) 00:13:35.004 fused_ordering(454) 00:13:35.004 fused_ordering(455) 00:13:35.004 fused_ordering(456) 00:13:35.004 fused_ordering(457) 00:13:35.004 fused_ordering(458) 00:13:35.004 fused_ordering(459) 00:13:35.004 fused_ordering(460) 00:13:35.004 fused_ordering(461) 00:13:35.004 fused_ordering(462) 00:13:35.004 fused_ordering(463) 00:13:35.004 fused_ordering(464) 00:13:35.004 fused_ordering(465) 00:13:35.004 fused_ordering(466) 00:13:35.004 fused_ordering(467) 00:13:35.004 fused_ordering(468) 00:13:35.004 fused_ordering(469) 00:13:35.004 fused_ordering(470) 00:13:35.004 fused_ordering(471) 00:13:35.004 fused_ordering(472) 00:13:35.004 fused_ordering(473) 00:13:35.004 fused_ordering(474) 00:13:35.004 fused_ordering(475) 00:13:35.004 fused_ordering(476) 00:13:35.004 fused_ordering(477) 00:13:35.004 fused_ordering(478) 00:13:35.004 fused_ordering(479) 00:13:35.004 fused_ordering(480) 00:13:35.004 fused_ordering(481) 00:13:35.004 fused_ordering(482) 00:13:35.004 fused_ordering(483) 00:13:35.004 fused_ordering(484) 00:13:35.004 fused_ordering(485) 00:13:35.004 fused_ordering(486) 00:13:35.004 fused_ordering(487) 00:13:35.004 fused_ordering(488) 00:13:35.004 fused_ordering(489) 00:13:35.004 fused_ordering(490) 00:13:35.004 fused_ordering(491) 00:13:35.004 fused_ordering(492) 00:13:35.004 fused_ordering(493) 00:13:35.004 fused_ordering(494) 00:13:35.004 fused_ordering(495) 00:13:35.004 fused_ordering(496) 00:13:35.004 fused_ordering(497) 00:13:35.004 fused_ordering(498) 00:13:35.004 fused_ordering(499) 00:13:35.004 fused_ordering(500) 00:13:35.004 fused_ordering(501) 00:13:35.004 fused_ordering(502) 00:13:35.004 fused_ordering(503) 00:13:35.004 fused_ordering(504) 00:13:35.004 fused_ordering(505) 00:13:35.004 fused_ordering(506) 00:13:35.004 fused_ordering(507) 00:13:35.004 fused_ordering(508) 00:13:35.004 fused_ordering(509) 00:13:35.004 fused_ordering(510) 00:13:35.004 fused_ordering(511) 00:13:35.004 fused_ordering(512) 00:13:35.004 fused_ordering(513) 00:13:35.004 fused_ordering(514) 00:13:35.004 fused_ordering(515) 00:13:35.004 fused_ordering(516) 00:13:35.004 fused_ordering(517) 00:13:35.004 fused_ordering(518) 00:13:35.004 fused_ordering(519) 00:13:35.004 fused_ordering(520) 00:13:35.004 fused_ordering(521) 00:13:35.004 fused_ordering(522) 00:13:35.004 fused_ordering(523) 00:13:35.004 fused_ordering(524) 00:13:35.004 fused_ordering(525) 00:13:35.004 fused_ordering(526) 
00:13:35.004 fused_ordering(527) 00:13:35.004 fused_ordering(528) 00:13:35.004 fused_ordering(529) 00:13:35.004 fused_ordering(530) 00:13:35.004 fused_ordering(531) 00:13:35.004 fused_ordering(532) 00:13:35.004 fused_ordering(533) 00:13:35.004 fused_ordering(534) 00:13:35.004 fused_ordering(535) 00:13:35.004 fused_ordering(536) 00:13:35.004 fused_ordering(537) 00:13:35.004 fused_ordering(538) 00:13:35.004 fused_ordering(539) 00:13:35.004 fused_ordering(540) 00:13:35.004 fused_ordering(541) 00:13:35.004 fused_ordering(542) 00:13:35.004 fused_ordering(543) 00:13:35.004 fused_ordering(544) 00:13:35.004 fused_ordering(545) 00:13:35.004 fused_ordering(546) 00:13:35.004 fused_ordering(547) 00:13:35.004 fused_ordering(548) 00:13:35.004 fused_ordering(549) 00:13:35.004 fused_ordering(550) 00:13:35.004 fused_ordering(551) 00:13:35.004 fused_ordering(552) 00:13:35.004 fused_ordering(553) 00:13:35.004 fused_ordering(554) 00:13:35.004 fused_ordering(555) 00:13:35.004 fused_ordering(556) 00:13:35.004 fused_ordering(557) 00:13:35.004 fused_ordering(558) 00:13:35.004 fused_ordering(559) 00:13:35.004 fused_ordering(560) 00:13:35.004 fused_ordering(561) 00:13:35.004 fused_ordering(562) 00:13:35.004 fused_ordering(563) 00:13:35.004 fused_ordering(564) 00:13:35.004 fused_ordering(565) 00:13:35.004 fused_ordering(566) 00:13:35.004 fused_ordering(567) 00:13:35.004 fused_ordering(568) 00:13:35.004 fused_ordering(569) 00:13:35.004 fused_ordering(570) 00:13:35.004 fused_ordering(571) 00:13:35.004 fused_ordering(572) 00:13:35.004 fused_ordering(573) 00:13:35.004 fused_ordering(574) 00:13:35.004 fused_ordering(575) 00:13:35.004 fused_ordering(576) 00:13:35.004 fused_ordering(577) 00:13:35.004 fused_ordering(578) 00:13:35.004 fused_ordering(579) 00:13:35.004 fused_ordering(580) 00:13:35.004 fused_ordering(581) 00:13:35.004 fused_ordering(582) 00:13:35.004 fused_ordering(583) 00:13:35.004 fused_ordering(584) 00:13:35.004 fused_ordering(585) 00:13:35.004 fused_ordering(586) 00:13:35.004 fused_ordering(587) 00:13:35.004 fused_ordering(588) 00:13:35.004 fused_ordering(589) 00:13:35.004 fused_ordering(590) 00:13:35.004 fused_ordering(591) 00:13:35.004 fused_ordering(592) 00:13:35.004 fused_ordering(593) 00:13:35.004 fused_ordering(594) 00:13:35.004 fused_ordering(595) 00:13:35.004 fused_ordering(596) 00:13:35.004 fused_ordering(597) 00:13:35.004 fused_ordering(598) 00:13:35.004 fused_ordering(599) 00:13:35.004 fused_ordering(600) 00:13:35.004 fused_ordering(601) 00:13:35.004 fused_ordering(602) 00:13:35.004 fused_ordering(603) 00:13:35.004 fused_ordering(604) 00:13:35.004 fused_ordering(605) 00:13:35.004 fused_ordering(606) 00:13:35.004 fused_ordering(607) 00:13:35.004 fused_ordering(608) 00:13:35.004 fused_ordering(609) 00:13:35.004 fused_ordering(610) 00:13:35.004 fused_ordering(611) 00:13:35.004 fused_ordering(612) 00:13:35.004 fused_ordering(613) 00:13:35.004 fused_ordering(614) 00:13:35.004 fused_ordering(615) 00:13:35.951 fused_ordering(616) 00:13:35.951 fused_ordering(617) 00:13:35.951 fused_ordering(618) 00:13:35.951 fused_ordering(619) 00:13:35.951 fused_ordering(620) 00:13:35.951 fused_ordering(621) 00:13:35.951 fused_ordering(622) 00:13:35.951 fused_ordering(623) 00:13:35.951 fused_ordering(624) 00:13:35.951 fused_ordering(625) 00:13:35.951 fused_ordering(626) 00:13:35.951 fused_ordering(627) 00:13:35.951 fused_ordering(628) 00:13:35.951 fused_ordering(629) 00:13:35.951 fused_ordering(630) 00:13:35.951 fused_ordering(631) 00:13:35.951 fused_ordering(632) 00:13:35.951 fused_ordering(633) 00:13:35.951 
fused_ordering(634) 00:13:35.951 fused_ordering(635) 00:13:35.952 fused_ordering(636) 00:13:35.952 fused_ordering(637) 00:13:35.952 fused_ordering(638) 00:13:35.952 fused_ordering(639) 00:13:35.952 fused_ordering(640) 00:13:35.952 fused_ordering(641) 00:13:35.952 fused_ordering(642) 00:13:35.952 fused_ordering(643) 00:13:35.952 fused_ordering(644) 00:13:35.952 fused_ordering(645) 00:13:35.952 fused_ordering(646) 00:13:35.952 fused_ordering(647) 00:13:35.952 fused_ordering(648) 00:13:35.952 fused_ordering(649) 00:13:35.952 fused_ordering(650) 00:13:35.952 fused_ordering(651) 00:13:35.952 fused_ordering(652) 00:13:35.952 fused_ordering(653) 00:13:35.952 fused_ordering(654) 00:13:35.952 fused_ordering(655) 00:13:35.952 fused_ordering(656) 00:13:35.952 fused_ordering(657) 00:13:35.952 fused_ordering(658) 00:13:35.952 fused_ordering(659) 00:13:35.952 fused_ordering(660) 00:13:35.952 fused_ordering(661) 00:13:35.952 fused_ordering(662) 00:13:35.952 fused_ordering(663) 00:13:35.952 fused_ordering(664) 00:13:35.952 fused_ordering(665) 00:13:35.952 fused_ordering(666) 00:13:35.952 fused_ordering(667) 00:13:35.952 fused_ordering(668) 00:13:35.952 fused_ordering(669) 00:13:35.952 fused_ordering(670) 00:13:35.952 fused_ordering(671) 00:13:35.952 fused_ordering(672) 00:13:35.952 fused_ordering(673) 00:13:35.952 fused_ordering(674) 00:13:35.952 fused_ordering(675) 00:13:35.952 fused_ordering(676) 00:13:35.952 fused_ordering(677) 00:13:35.952 fused_ordering(678) 00:13:35.952 fused_ordering(679) 00:13:35.952 fused_ordering(680) 00:13:35.952 fused_ordering(681) 00:13:35.952 fused_ordering(682) 00:13:35.952 fused_ordering(683) 00:13:35.952 fused_ordering(684) 00:13:35.952 fused_ordering(685) 00:13:35.952 fused_ordering(686) 00:13:35.952 fused_ordering(687) 00:13:35.952 fused_ordering(688) 00:13:35.952 fused_ordering(689) 00:13:35.952 fused_ordering(690) 00:13:35.952 fused_ordering(691) 00:13:35.952 fused_ordering(692) 00:13:35.952 fused_ordering(693) 00:13:35.952 fused_ordering(694) 00:13:35.952 fused_ordering(695) 00:13:35.952 fused_ordering(696) 00:13:35.952 fused_ordering(697) 00:13:35.952 fused_ordering(698) 00:13:35.952 fused_ordering(699) 00:13:35.952 fused_ordering(700) 00:13:35.952 fused_ordering(701) 00:13:35.952 fused_ordering(702) 00:13:35.952 fused_ordering(703) 00:13:35.952 fused_ordering(704) 00:13:35.952 fused_ordering(705) 00:13:35.952 fused_ordering(706) 00:13:35.952 fused_ordering(707) 00:13:35.952 fused_ordering(708) 00:13:35.952 fused_ordering(709) 00:13:35.952 fused_ordering(710) 00:13:35.952 fused_ordering(711) 00:13:35.952 fused_ordering(712) 00:13:35.952 fused_ordering(713) 00:13:35.952 fused_ordering(714) 00:13:35.952 fused_ordering(715) 00:13:35.952 fused_ordering(716) 00:13:35.952 fused_ordering(717) 00:13:35.952 fused_ordering(718) 00:13:35.952 fused_ordering(719) 00:13:35.952 fused_ordering(720) 00:13:35.952 fused_ordering(721) 00:13:35.952 fused_ordering(722) 00:13:35.952 fused_ordering(723) 00:13:35.952 fused_ordering(724) 00:13:35.952 fused_ordering(725) 00:13:35.952 fused_ordering(726) 00:13:35.952 fused_ordering(727) 00:13:35.952 fused_ordering(728) 00:13:35.952 fused_ordering(729) 00:13:35.952 fused_ordering(730) 00:13:35.952 fused_ordering(731) 00:13:35.952 fused_ordering(732) 00:13:35.952 fused_ordering(733) 00:13:35.952 fused_ordering(734) 00:13:35.952 fused_ordering(735) 00:13:35.952 fused_ordering(736) 00:13:35.952 fused_ordering(737) 00:13:35.952 fused_ordering(738) 00:13:35.952 fused_ordering(739) 00:13:35.952 fused_ordering(740) 00:13:35.952 fused_ordering(741) 
00:13:35.952 fused_ordering(742) 00:13:35.952 fused_ordering(743) 00:13:35.952 fused_ordering(744) 00:13:35.952 fused_ordering(745) 00:13:35.952 fused_ordering(746) 00:13:35.952 fused_ordering(747) 00:13:35.952 fused_ordering(748) 00:13:35.952 fused_ordering(749) 00:13:35.952 fused_ordering(750) 00:13:35.952 fused_ordering(751) 00:13:35.952 fused_ordering(752) 00:13:35.952 fused_ordering(753) 00:13:35.952 fused_ordering(754) 00:13:35.952 fused_ordering(755) 00:13:35.952 fused_ordering(756) 00:13:35.952 fused_ordering(757) 00:13:35.952 fused_ordering(758) 00:13:35.952 fused_ordering(759) 00:13:35.952 fused_ordering(760) 00:13:35.952 fused_ordering(761) 00:13:35.952 fused_ordering(762) 00:13:35.952 fused_ordering(763) 00:13:35.952 fused_ordering(764) 00:13:35.952 fused_ordering(765) 00:13:35.952 fused_ordering(766) 00:13:35.952 fused_ordering(767) 00:13:35.952 fused_ordering(768) 00:13:35.952 fused_ordering(769) 00:13:35.952 fused_ordering(770) 00:13:35.952 fused_ordering(771) 00:13:35.952 fused_ordering(772) 00:13:35.952 fused_ordering(773) 00:13:35.952 fused_ordering(774) 00:13:35.952 fused_ordering(775) 00:13:35.952 fused_ordering(776) 00:13:35.952 fused_ordering(777) 00:13:35.952 fused_ordering(778) 00:13:35.952 fused_ordering(779) 00:13:35.952 fused_ordering(780) 00:13:35.952 fused_ordering(781) 00:13:35.952 fused_ordering(782) 00:13:35.952 fused_ordering(783) 00:13:35.952 fused_ordering(784) 00:13:35.952 fused_ordering(785) 00:13:35.952 fused_ordering(786) 00:13:35.952 fused_ordering(787) 00:13:35.952 fused_ordering(788) 00:13:35.952 fused_ordering(789) 00:13:35.952 fused_ordering(790) 00:13:35.952 fused_ordering(791) 00:13:35.952 fused_ordering(792) 00:13:35.952 fused_ordering(793) 00:13:35.952 fused_ordering(794) 00:13:35.952 fused_ordering(795) 00:13:35.952 fused_ordering(796) 00:13:35.952 fused_ordering(797) 00:13:35.952 fused_ordering(798) 00:13:35.952 fused_ordering(799) 00:13:35.952 fused_ordering(800) 00:13:35.952 fused_ordering(801) 00:13:35.952 fused_ordering(802) 00:13:35.952 fused_ordering(803) 00:13:35.952 fused_ordering(804) 00:13:35.952 fused_ordering(805) 00:13:35.952 fused_ordering(806) 00:13:35.952 fused_ordering(807) 00:13:35.952 fused_ordering(808) 00:13:35.952 fused_ordering(809) 00:13:35.952 fused_ordering(810) 00:13:35.952 fused_ordering(811) 00:13:35.952 fused_ordering(812) 00:13:35.952 fused_ordering(813) 00:13:35.952 fused_ordering(814) 00:13:35.952 fused_ordering(815) 00:13:35.952 fused_ordering(816) 00:13:35.952 fused_ordering(817) 00:13:35.952 fused_ordering(818) 00:13:35.952 fused_ordering(819) 00:13:35.952 fused_ordering(820) 00:13:36.892 fused_ordering(821) 00:13:36.892 fused_ordering(822) 00:13:36.892 fused_ordering(823) 00:13:36.892 fused_ordering(824) 00:13:36.892 fused_ordering(825) 00:13:36.892 fused_ordering(826) 00:13:36.892 fused_ordering(827) 00:13:36.892 fused_ordering(828) 00:13:36.892 fused_ordering(829) 00:13:36.892 fused_ordering(830) 00:13:36.892 fused_ordering(831) 00:13:36.892 fused_ordering(832) 00:13:36.892 fused_ordering(833) 00:13:36.892 fused_ordering(834) 00:13:36.892 fused_ordering(835) 00:13:36.892 fused_ordering(836) 00:13:36.892 fused_ordering(837) 00:13:36.892 fused_ordering(838) 00:13:36.892 fused_ordering(839) 00:13:36.892 fused_ordering(840) 00:13:36.892 fused_ordering(841) 00:13:36.892 fused_ordering(842) 00:13:36.892 fused_ordering(843) 00:13:36.892 fused_ordering(844) 00:13:36.892 fused_ordering(845) 00:13:36.892 fused_ordering(846) 00:13:36.892 fused_ordering(847) 00:13:36.892 fused_ordering(848) 00:13:36.892 
fused_ordering(849) 00:13:36.892 fused_ordering(850) 00:13:36.892 fused_ordering(851) 00:13:36.892 fused_ordering(852) 00:13:36.892 fused_ordering(853) 00:13:36.892 fused_ordering(854) 00:13:36.892 fused_ordering(855) 00:13:36.892 fused_ordering(856) 00:13:36.892 fused_ordering(857) 00:13:36.892 fused_ordering(858) 00:13:36.892 fused_ordering(859) 00:13:36.892 fused_ordering(860) 00:13:36.892 fused_ordering(861) 00:13:36.892 fused_ordering(862) 00:13:36.892 fused_ordering(863) 00:13:36.892 fused_ordering(864) 00:13:36.892 fused_ordering(865) 00:13:36.892 fused_ordering(866) 00:13:36.892 fused_ordering(867) 00:13:36.892 fused_ordering(868) 00:13:36.892 fused_ordering(869) 00:13:36.892 fused_ordering(870) 00:13:36.892 fused_ordering(871) 00:13:36.892 fused_ordering(872) 00:13:36.892 fused_ordering(873) 00:13:36.892 fused_ordering(874) 00:13:36.892 fused_ordering(875) 00:13:36.892 fused_ordering(876) 00:13:36.892 fused_ordering(877) 00:13:36.892 fused_ordering(878) 00:13:36.892 fused_ordering(879) 00:13:36.892 fused_ordering(880) 00:13:36.892 fused_ordering(881) 00:13:36.892 fused_ordering(882) 00:13:36.892 fused_ordering(883) 00:13:36.892 fused_ordering(884) 00:13:36.892 fused_ordering(885) 00:13:36.892 fused_ordering(886) 00:13:36.892 fused_ordering(887) 00:13:36.892 fused_ordering(888) 00:13:36.892 fused_ordering(889) 00:13:36.892 fused_ordering(890) 00:13:36.892 fused_ordering(891) 00:13:36.892 fused_ordering(892) 00:13:36.892 fused_ordering(893) 00:13:36.892 fused_ordering(894) 00:13:36.892 fused_ordering(895) 00:13:36.892 fused_ordering(896) 00:13:36.892 fused_ordering(897) 00:13:36.892 fused_ordering(898) 00:13:36.892 fused_ordering(899) 00:13:36.892 fused_ordering(900) 00:13:36.892 fused_ordering(901) 00:13:36.892 fused_ordering(902) 00:13:36.892 fused_ordering(903) 00:13:36.892 fused_ordering(904) 00:13:36.892 fused_ordering(905) 00:13:36.892 fused_ordering(906) 00:13:36.892 fused_ordering(907) 00:13:36.892 fused_ordering(908) 00:13:36.892 fused_ordering(909) 00:13:36.892 fused_ordering(910) 00:13:36.892 fused_ordering(911) 00:13:36.892 fused_ordering(912) 00:13:36.892 fused_ordering(913) 00:13:36.892 fused_ordering(914) 00:13:36.892 fused_ordering(915) 00:13:36.892 fused_ordering(916) 00:13:36.892 fused_ordering(917) 00:13:36.892 fused_ordering(918) 00:13:36.892 fused_ordering(919) 00:13:36.892 fused_ordering(920) 00:13:36.892 fused_ordering(921) 00:13:36.892 fused_ordering(922) 00:13:36.892 fused_ordering(923) 00:13:36.892 fused_ordering(924) 00:13:36.892 fused_ordering(925) 00:13:36.892 fused_ordering(926) 00:13:36.892 fused_ordering(927) 00:13:36.892 fused_ordering(928) 00:13:36.892 fused_ordering(929) 00:13:36.892 fused_ordering(930) 00:13:36.892 fused_ordering(931) 00:13:36.892 fused_ordering(932) 00:13:36.892 fused_ordering(933) 00:13:36.892 fused_ordering(934) 00:13:36.892 fused_ordering(935) 00:13:36.892 fused_ordering(936) 00:13:36.892 fused_ordering(937) 00:13:36.892 fused_ordering(938) 00:13:36.892 fused_ordering(939) 00:13:36.892 fused_ordering(940) 00:13:36.892 fused_ordering(941) 00:13:36.892 fused_ordering(942) 00:13:36.892 fused_ordering(943) 00:13:36.892 fused_ordering(944) 00:13:36.892 fused_ordering(945) 00:13:36.892 fused_ordering(946) 00:13:36.892 fused_ordering(947) 00:13:36.892 fused_ordering(948) 00:13:36.892 fused_ordering(949) 00:13:36.892 fused_ordering(950) 00:13:36.892 fused_ordering(951) 00:13:36.892 fused_ordering(952) 00:13:36.892 fused_ordering(953) 00:13:36.892 fused_ordering(954) 00:13:36.892 fused_ordering(955) 00:13:36.892 fused_ordering(956) 
00:13:36.892 fused_ordering(957) 00:13:36.892 fused_ordering(958) 00:13:36.892 fused_ordering(959) 00:13:36.892 fused_ordering(960) 00:13:36.892 fused_ordering(961) 00:13:36.892 fused_ordering(962) 00:13:36.892 fused_ordering(963) 00:13:36.892 fused_ordering(964) 00:13:36.892 fused_ordering(965) 00:13:36.892 fused_ordering(966) 00:13:36.892 fused_ordering(967) 00:13:36.892 fused_ordering(968) 00:13:36.892 fused_ordering(969) 00:13:36.892 fused_ordering(970) 00:13:36.892 fused_ordering(971) 00:13:36.892 fused_ordering(972) 00:13:36.892 fused_ordering(973) 00:13:36.892 fused_ordering(974) 00:13:36.892 fused_ordering(975) 00:13:36.892 fused_ordering(976) 00:13:36.892 fused_ordering(977) 00:13:36.892 fused_ordering(978) 00:13:36.892 fused_ordering(979) 00:13:36.892 fused_ordering(980) 00:13:36.892 fused_ordering(981) 00:13:36.892 fused_ordering(982) 00:13:36.892 fused_ordering(983) 00:13:36.892 fused_ordering(984) 00:13:36.892 fused_ordering(985) 00:13:36.892 fused_ordering(986) 00:13:36.892 fused_ordering(987) 00:13:36.892 fused_ordering(988) 00:13:36.892 fused_ordering(989) 00:13:36.892 fused_ordering(990) 00:13:36.892 fused_ordering(991) 00:13:36.892 fused_ordering(992) 00:13:36.892 fused_ordering(993) 00:13:36.892 fused_ordering(994) 00:13:36.892 fused_ordering(995) 00:13:36.892 fused_ordering(996) 00:13:36.892 fused_ordering(997) 00:13:36.892 fused_ordering(998) 00:13:36.892 fused_ordering(999) 00:13:36.892 fused_ordering(1000) 00:13:36.892 fused_ordering(1001) 00:13:36.892 fused_ordering(1002) 00:13:36.892 fused_ordering(1003) 00:13:36.892 fused_ordering(1004) 00:13:36.892 fused_ordering(1005) 00:13:36.892 fused_ordering(1006) 00:13:36.892 fused_ordering(1007) 00:13:36.892 fused_ordering(1008) 00:13:36.892 fused_ordering(1009) 00:13:36.892 fused_ordering(1010) 00:13:36.892 fused_ordering(1011) 00:13:36.892 fused_ordering(1012) 00:13:36.892 fused_ordering(1013) 00:13:36.892 fused_ordering(1014) 00:13:36.892 fused_ordering(1015) 00:13:36.892 fused_ordering(1016) 00:13:36.892 fused_ordering(1017) 00:13:36.892 fused_ordering(1018) 00:13:36.892 fused_ordering(1019) 00:13:36.892 fused_ordering(1020) 00:13:36.892 fused_ordering(1021) 00:13:36.892 fused_ordering(1022) 00:13:36.892 fused_ordering(1023) 00:13:36.892 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:36.892 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:36.892 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:36.892 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:36.892 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:36.892 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:36.892 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:36.892 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:36.892 rmmod nvme_tcp 00:13:36.892 rmmod nvme_fabrics 00:13:36.892 rmmod nvme_keyring 00:13:36.893 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:36.893 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:36.893 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 
-- # return 0 00:13:36.893 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2924075 ']' 00:13:36.893 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2924075 00:13:36.893 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2924075 ']' 00:13:36.893 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 2924075 00:13:36.893 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:13:36.893 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:36.893 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2924075 00:13:37.153 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:37.153 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:37.153 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2924075' 00:13:37.153 killing process with pid 2924075 00:13:37.153 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2924075 00:13:37.153 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2924075 00:13:37.153 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:37.153 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:37.153 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:37.153 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:37.153 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:37.153 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.153 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:37.153 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.696 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:39.696 00:13:39.696 real 0m14.267s 00:13:39.696 user 0m9.855s 00:13:39.696 sys 0m8.038s 00:13:39.696 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.696 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.696 ************************************ 00:13:39.696 END TEST nvmf_fused_ordering 00:13:39.696 ************************************ 00:13:39.696 13:55:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:39.696 13:55:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:39.696 13:55:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.696 13:55:06 
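The trace above is the nvmftestfini teardown of the fused-ordering run: the exit trap is cleared, the nvme-tcp/nvme-fabrics modules are unloaded, the nvmf_tgt process (pid 2924075 in this run) is killed, and the target-side addresses are flushed before the next test starts. A minimal sketch of that pattern, assuming $nvmfpid holds the target PID; the canonical helpers live in test/nvmf/common.sh and common/autotest_common.sh and do more bookkeeping than shown here.

# Illustrative teardown only -- mirrors the nvmftestfini/killprocess trace above.
nvmftestfini_sketch() {
  trap - SIGINT SIGTERM EXIT                     # drop the trap installed at test start
  sync
  for i in {1..20}; do                           # modules can still be busy right after disconnect
    modprobe -v -r nvme-tcp && break
    sleep 1
  done
  modprobe -v -r nvme-fabrics || true
  if [[ -n $nvmfpid ]]; then                     # assumed: PID of the nvmf_tgt started at setup
    kill "$nvmfpid" && wait "$nvmfpid"
  fi
  ip -4 addr flush cvl_0_1 2>/dev/null || true   # interface name is specific to this rig
}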
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:39.696 ************************************ 00:13:39.696 START TEST nvmf_ns_masking 00:13:39.696 ************************************ 00:13:39.696 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:39.696 * Looking for test storage... 00:13:39.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.696 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.696 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:39.696 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.696 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.696 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.696 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.696 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.696 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.696 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.696 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.696 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.696 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:39.697 13:55:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=6e014ef8-8dfc-4f29-a4fa-2176a97ea104 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=579111c1-60a8-4ab4-ab41-e31b8b9a7b72 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=dbc20aed-73c1-4619-96bd-d7bcbc0984ae 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:13:39.697 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:44.981 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:44.981 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:13:44.981 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
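Before any RPCs run, ns_masking.sh fixes the identifiers used for the rest of the test: a fixed subsystem NQN, two host NQNs, and per-run UUIDs from uuidgen. Collected here from the trace above for reference (the UUID values change on every run):

# Identifiers used by this ns_masking run, as echoed above.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN1=nqn.2016-06.io.spdk:host1
HOSTNQN2=nqn.2016-06.io.spdk:host2
ns1uuid=$(uuidgen)   # this run: 6e014ef8-8dfc-4f29-a4fa-2176a97ea104
ns2uuid=$(uuidgen)   # this run: 579111c1-60a8-4ab4-ab41-e31b8b9a7b72
HOSTID=$(uuidgen)    # this run: dbc20aed-73c1-4619-96bd-d7bcbc0984ae
loops=5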
nvmf/common.sh@291 -- # local -a pci_devs 00:13:44.981 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:44.981 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:44.981 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:44.981 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:44.981 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:13:44.981 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:44.981 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:13:44.981 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:13:44.981 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:13:44.981 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:13:44.981 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:13:44.981 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:13:44.981 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:44.981 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:44.982 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:44.982 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:44.982 Found net devices under 0000:86:00.0: cvl_0_0 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:44.982 Found net devices under 0000:86:00.1: cvl_0_1 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:44.982 13:55:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:44.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:13:44.982 00:13:44.982 --- 10.0.0.2 ping statistics --- 00:13:44.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.982 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:44.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:44.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.417 ms 00:13:44.982 00:13:44.982 --- 10.0.0.1 ping statistics --- 00:13:44.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.982 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:44.982 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:44.982 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:44.982 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:44.982 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:44.982 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:44.982 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2928540 00:13:44.982 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2928540 00:13:44.982 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2928540 ']' 00:13:44.982 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.982 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:44.982 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
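The nvmftestinit/nvmf_tcp_init sequence traced above builds the test topology on a single host: the target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace with 10.0.0.2/24, the initiator port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened, and both directions are ping-checked. Condensed from the trace; interface names and addresses are the ones this rig reports.

# Condensed nvmf_tcp_init, as run above (root privileges required).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the listener
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

nvmf_tgt is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF), which is why the data path targets 10.0.0.2 while rpc.py keeps talking to the shared /var/tmp/spdk.sock on the local filesystem.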
00:13:44.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.982 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:44.983 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:44.983 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:44.983 [2024-07-26 13:55:12.070149] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:13:44.983 [2024-07-26 13:55:12.070194] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.983 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.983 [2024-07-26 13:55:12.125911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.983 [2024-07-26 13:55:12.204546] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.983 [2024-07-26 13:55:12.204581] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.983 [2024-07-26 13:55:12.204588] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.983 [2024-07-26 13:55:12.204594] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.983 [2024-07-26 13:55:12.204599] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.983 [2024-07-26 13:55:12.204619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.552 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:45.552 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:45.552 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:45.552 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:45.552 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:45.552 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.552 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:45.812 [2024-07-26 13:55:13.060020] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.812 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:45.812 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:45.812 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:46.071 Malloc1 00:13:46.072 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc2 00:13:46.072 Malloc2 00:13:46.072 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:46.330 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:46.590 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:46.590 [2024-07-26 13:55:13.929017] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.590 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:46.590 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dbc20aed-73c1-4619-96bd-d7bcbc0984ae -a 10.0.0.2 -s 4420 -i 4 00:13:46.849 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:46.849 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:46.849 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:46.849 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:46.849 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:48.758 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:48.758 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:48.758 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:48.758 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:48.758 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:48.758 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:48.758 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:48.758 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:49.017 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:49.017 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:49.017 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:49.017 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.017 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:49.017 [ 0]:0x1 00:13:49.017 13:55:16 
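At this point the target side is fully provisioned over the RPC socket: a TCP transport, two 64 MB malloc bdevs (512-byte blocks), one subsystem with namespace 1 attached, and a listener on 10.0.0.2:4420. The same sequence gathered in one place, with the transport flags copied verbatim from the trace above:

# Target provisioning for the ns_masking test, via the stock SPDK rpc.py.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc bdev_malloc_create 64 512 -b Malloc2
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420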
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:49.017 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:49.017 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=01deac67c214449b8c4e0fbf16dda512 00:13:49.017 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 01deac67c214449b8c4e0fbf16dda512 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:49.017 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:49.277 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:49.277 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:49.277 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.277 [ 0]:0x1 00:13:49.277 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:49.277 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:49.277 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=01deac67c214449b8c4e0fbf16dda512 00:13:49.277 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 01deac67c214449b8c4e0fbf16dda512 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:49.277 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:49.277 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.277 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:49.277 [ 1]:0x2 00:13:49.277 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:49.277 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:49.277 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3840b5158fcb407fb5c66d3b1443332d 00:13:49.277 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3840b5158fcb407fb5c66d3b1443332d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:49.277 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:49.277 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:49.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.277 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.537 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:49.797 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@83 -- # connect 1 00:13:49.797 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dbc20aed-73c1-4619-96bd-d7bcbc0984ae -a 10.0.0.2 -s 4420 -i 4 00:13:49.797 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:49.797 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:49.797 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:49.797 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:13:49.798 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:13:49.798 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 
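The host side drives everything through three small helpers that the trace keeps repeating: connect (nvme connect with the fixed host NQN and the generated host ID), waitforserial (poll lsblk until the expected number of namespaces carrying the test serial appears), and ns_is_visible (list the namespaces and check the NGUID, which reads as all zeros for a masked namespace). Sketches of the three follow, assuming the single controller enumerates as /dev/nvme0 as it does in this run; the canonical versions live in target/ns_masking.sh and common/autotest_common.sh.

# Host-side helpers, mirroring the trace above (illustrative sketches).
connect() {
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I dbc20aed-73c1-4619-96bd-d7bcbc0984ae -a 10.0.0.2 -s 4420 -i 4
}

waitforserial() {                               # waitforserial SERIAL [COUNT]
  local serial=$1 want=${2:-1} i=0
  while (( i++ <= 15 )); do
    sleep 2
    (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == want )) && return 0
  done
  return 1
}

ns_is_visible() {                               # ns_is_visible 0x1
  nvme list-ns /dev/nvme0 | grep "$1"           # prints e.g. "[ 0]:0x1" when the NSID is listed
  local nguid
  nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
  [[ $nguid != "00000000000000000000000000000000" ]]   # masked namespaces report an all-zero NGUID
}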
00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:52.339 [ 0]:0x2 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3840b5158fcb407fb5c66d3b1443332d 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3840b5158fcb407fb5c66d3b1443332d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:52.339 [ 0]:0x1 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=01deac67c214449b8c4e0fbf16dda512 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 01deac67c214449b8c4e0fbf16dda512 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:52.339 [ 1]:0x2 00:13:52.339 
13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3840b5158fcb407fb5c66d3b1443332d 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3840b5158fcb407fb5c66d3b1443332d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:52.339 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:52.600 [ 0]:0x2 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3840b5158fcb407fb5c66d3b1443332d 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3840b5158fcb407fb5c66d3b1443332d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:52.600 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:52.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.600 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:52.860 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:52.860 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dbc20aed-73c1-4619-96bd-d7bcbc0984ae -a 10.0.0.2 -s 4420 -i 4 00:13:53.120 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:53.120 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:53.120 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:53.120 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:53.120 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:53.120 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:55.063 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:55.063 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:55.063 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:55.063 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:55.063 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:55.063 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:55.063 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:55.063 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:55.063 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:55.063 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:55.063 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:55.063 
13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:55.063 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:55.063 [ 0]:0x1 00:13:55.063 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:55.063 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:55.063 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=01deac67c214449b8c4e0fbf16dda512 00:13:55.063 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 01deac67c214449b8c4e0fbf16dda512 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.063 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:55.063 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:55.063 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:55.063 [ 1]:0x2 00:13:55.323 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:55.323 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:55.323 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3840b5158fcb407fb5c66d3b1443332d 00:13:55.323 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3840b5158fcb407fb5c66d3b1443332d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.323 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:55.323 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:55.323 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:55.323 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:55.323 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:55.323 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:55.323 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:55.323 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:55.323 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:55.323 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:55.323 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:55.323 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:55.323 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:55.583 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:55.583 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.583 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:55.583 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:55.583 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:55.583 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:55.584 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:55.584 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:55.584 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:55.584 [ 0]:0x2 00:13:55.584 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:55.584 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:55.584 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3840b5158fcb407fb5c66d3b1443332d 00:13:55.584 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3840b5158fcb407fb5c66d3b1443332d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.584 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:55.584 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:55.584 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:55.584 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.584 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:55.584 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.584 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:55.584 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.584 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:55.584 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.584 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:55.584 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:55.844 [2024-07-26 13:55:23.022900] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:55.844 request: 00:13:55.844 { 00:13:55.844 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.844 "nsid": 2, 00:13:55.844 "host": "nqn.2016-06.io.spdk:host1", 00:13:55.844 "method": "nvmf_ns_remove_host", 00:13:55.844 "req_id": 1 00:13:55.844 } 00:13:55.844 Got JSON-RPC error response 00:13:55.844 response: 00:13:55.844 { 00:13:55.844 "code": -32602, 00:13:55.844 "message": "Invalid parameters" 00:13:55.844 } 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:55.844 13:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:55.844 [ 0]:0x2 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3840b5158fcb407fb5c66d3b1443332d 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3840b5158fcb407fb5c66d3b1443332d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:55.844 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:56.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.104 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2930542 00:13:56.104 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.104 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2930542 /var/tmp/host.sock 00:13:56.104 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2930542 ']' 00:13:56.104 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:13:56.104 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:56.104 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:56.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:56.104 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:56.104 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:56.104 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:56.104 [2024-07-26 13:55:23.367760] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
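The masking checks traced above all reduce to the same ns_is_visible pattern from target/ns_masking.sh: list the namespaces on the connected controller, pull the NGUID with nvme id-ns piped through jq, and treat an all-zero NGUID as "masked", with nvmf_ns_add_host / nvmf_ns_remove_host toggling per-host visibility on the target side. A minimal sketch of that flow, with the device path, subsystem NQN and host NQN copied from the trace (the helper below is an illustration, not the original script):

#!/usr/bin/env bash
# Illustration of the ns_is_visible check and the per-host masking RPCs
# exercised in the trace above; the rpc.py path, subsystem NQN and host NQN
# are taken from the log, everything else is a sketch rather than the
# original target/ns_masking.sh.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2016-06.io.spdk:cnode1
hostnqn=nqn.2016-06.io.spdk:host1

ns_is_visible() {
    local nsid=$1
    # list-ns prints entries such as "[ 0]:0x1" for namespaces the host can see
    nvme list-ns /dev/nvme0 | grep "$nsid"
    # a masked namespace reports an all-zero NGUID in id-ns
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ "$nguid" != "00000000000000000000000000000000" ]]
}

# grant, verify, then revoke visibility of namespace 1 for host1
$rpc nvmf_ns_add_host    "$subnqn" 1 "$hostnqn"
ns_is_visible 0x1 && echo "nsid 1 visible to $hostnqn"
$rpc nvmf_ns_remove_host "$subnqn" 1 "$hostnqn"
ns_is_visible 0x1 || echo "nsid 1 masked for $hostnqn"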
00:13:56.104 [2024-07-26 13:55:23.367803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2930542 ] 00:13:56.104 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.104 [2024-07-26 13:55:23.419969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.104 [2024-07-26 13:55:23.494007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.043 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:57.043 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:57.043 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.043 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:57.303 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 6e014ef8-8dfc-4f29-a4fa-2176a97ea104 00:13:57.304 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:57.304 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6E014EF88DFC4F29A4FA2176A97EA104 -i 00:13:57.304 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 579111c1-60a8-4ab4-ab41-e31b8b9a7b72 00:13:57.304 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:57.304 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 579111C160A84AB4AB41E31B8B9A7B72 -i 00:13:57.563 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:57.823 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:57.823 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:57.823 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:58.084 nvme0n1 00:13:58.084 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:58.084 13:55:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:58.654 nvme1n2 00:13:58.654 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:58.654 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:58.654 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:58.654 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:58.654 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:58.654 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:58.654 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:58.654 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:58.654 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:58.914 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 6e014ef8-8dfc-4f29-a4fa-2176a97ea104 == \6\e\0\1\4\e\f\8\-\8\d\f\c\-\4\f\2\9\-\a\4\f\a\-\2\1\7\6\a\9\7\e\a\1\0\4 ]] 00:13:58.914 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:58.914 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:58.914 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:59.174 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 579111c1-60a8-4ab4-ab41-e31b8b9a7b72 == \5\7\9\1\1\1\c\1\-\6\0\a\8\-\4\a\b\4\-\a\b\4\1\-\e\3\1\b\8\b\9\a\7\b\7\2 ]] 00:13:59.174 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2930542 00:13:59.174 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2930542 ']' 00:13:59.174 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2930542 00:13:59.174 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:13:59.174 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:59.174 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2930542 00:13:59.174 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:59.174 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:59.174 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 2930542' 00:13:59.174 killing process with pid 2930542 00:13:59.174 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2930542 00:13:59.174 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2930542 00:13:59.434 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:59.695 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:13:59.695 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:13:59.695 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:59.695 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:13:59.695 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:59.695 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:13:59.695 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:59.695 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:59.695 rmmod nvme_tcp 00:13:59.695 rmmod nvme_fabrics 00:13:59.695 rmmod nvme_keyring 00:13:59.695 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:59.695 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:13:59.695 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:13:59.695 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2928540 ']' 00:13:59.695 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2928540 00:13:59.695 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2928540 ']' 00:13:59.695 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2928540 00:13:59.695 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:13:59.695 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:59.695 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2928540 00:13:59.695 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:59.695 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:59.695 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2928540' 00:13:59.695 killing process with pid 2928540 00:13:59.695 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2928540 00:13:59.695 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2928540 00:13:59.955 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:59.955 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:59.955 
13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:59.955 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:59.955 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:59.955 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.955 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.955 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.865 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:02.125 00:14:02.125 real 0m22.657s 00:14:02.125 user 0m24.291s 00:14:02.125 sys 0m5.958s 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:02.125 ************************************ 00:14:02.125 END TEST nvmf_ns_masking 00:14:02.125 ************************************ 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:02.125 ************************************ 00:14:02.125 START TEST nvmf_nvme_cli 00:14:02.125 ************************************ 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:02.125 * Looking for test storage... 
00:14:02.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.125 13:55:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:02.125 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:02.126 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:08.708 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:08.708 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:08.708 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:08.708 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:08.708 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:08.709 13:55:34 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:08.709 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:08.709 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:08.709 13:55:34 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:08.709 Found net devices under 0000:86:00.0: cvl_0_0 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:08.709 Found net devices under 0000:86:00.1: cvl_0_1 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:08.709 13:55:34 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:08.709 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:08.709 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:08.709 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:08.709 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:08.709 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:08.709 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:08.709 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:08.709 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:08.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:08.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:14:08.709 00:14:08.709 --- 10.0.0.2 ping statistics --- 00:14:08.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.709 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:14:08.709 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:08.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:08.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:14:08.709 00:14:08.709 --- 10.0.0.1 ping statistics --- 00:14:08.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.709 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:14:08.709 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:08.709 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:08.709 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:08.709 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:08.709 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:08.709 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:08.709 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:08.709 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:08.709 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:08.709 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:08.709 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:08.709 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:08.710 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:08.710 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2934776 00:14:08.710 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2934776 00:14:08.710 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:08.710 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2934776 ']' 00:14:08.710 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.710 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:08.710 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.710 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:08.710 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:08.710 [2024-07-26 13:55:35.299630] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
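The nvmf_tcp_init steps traced above split the two E810 ports between a private network namespace (target side, 10.0.0.2) and the root namespace (initiator side, 10.0.0.1), verify reachability with ping in both directions, and then launch nvmf_tgt inside the namespace. A condensed sketch of that wiring, with interface names, addresses and the nvmf_tgt command line copied from the trace:

#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init wiring shown above: cvl_0_0 is moved into a
# private network namespace as the target-side interface, while cvl_0_1
# stays in the root namespace as the initiator side. Names, addresses and
# the nvmf_tgt invocation are copied from the trace.

ns=cvl_0_0_ns_spdk
tgt_if=cvl_0_0          # target NIC, ends up inside $ns at 10.0.0.2
ini_if=cvl_0_1          # initiator NIC, stays in the root namespace at 10.0.0.1

ip netns add "$ns"
ip link set "$tgt_if" netns "$ns"

ip addr add 10.0.0.1/24 dev "$ini_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"

ip link set "$ini_if" up
ip netns exec "$ns" ip link set "$tgt_if" up
ip netns exec "$ns" ip link set lo up

# allow NVMe/TCP (port 4420) in from the initiator interface
iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT

# sanity-check both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec "$ns" ping -c 1 10.0.0.1

# run the SPDK target inside the namespace, as the trace does
ip netns exec "$ns" \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF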
00:14:08.710 [2024-07-26 13:55:35.299678] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.710 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.710 [2024-07-26 13:55:35.360472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:08.710 [2024-07-26 13:55:35.435801] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.710 [2024-07-26 13:55:35.435841] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.710 [2024-07-26 13:55:35.435848] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.710 [2024-07-26 13:55:35.435853] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.710 [2024-07-26 13:55:35.435858] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.710 [2024-07-26 13:55:35.435950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.710 [2024-07-26 13:55:35.436059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:08.710 [2024-07-26 13:55:35.436133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:08.710 [2024-07-26 13:55:35.436136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.710 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:08.710 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:08.710 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:08.710 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:08.710 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:08.710 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.710 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:08.710 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.710 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:08.971 [2024-07-26 13:55:36.150513] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:08.971 Malloc0 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:08.971 13:55:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:08.971 Malloc1 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:08.971 [2024-07-26 13:55:36.231652] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:08.971 00:14:08.971 Discovery Log Number of Records 2, Generation counter 2 00:14:08.971 =====Discovery Log Entry 0====== 00:14:08.971 trtype: tcp 00:14:08.971 adrfam: ipv4 00:14:08.971 subtype: current discovery subsystem 00:14:08.971 treq: not required 
00:14:08.971 portid: 0 00:14:08.971 trsvcid: 4420 00:14:08.971 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:08.971 traddr: 10.0.0.2 00:14:08.971 eflags: explicit discovery connections, duplicate discovery information 00:14:08.971 sectype: none 00:14:08.971 =====Discovery Log Entry 1====== 00:14:08.971 trtype: tcp 00:14:08.971 adrfam: ipv4 00:14:08.971 subtype: nvme subsystem 00:14:08.971 treq: not required 00:14:08.971 portid: 0 00:14:08.971 trsvcid: 4420 00:14:08.971 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:08.971 traddr: 10.0.0.2 00:14:08.971 eflags: none 00:14:08.971 sectype: none 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:08.971 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:10.354 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:10.354 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:10.354 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:10.354 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:10.354 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:10.354 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:12.265 /dev/nvme0n1 ]] 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:12.265 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:12.265 rmmod nvme_tcp 00:14:12.265 rmmod nvme_fabrics 00:14:12.265 rmmod nvme_keyring 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2934776 ']' 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2934776 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2934776 ']' 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2934776 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:12.265 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2934776 00:14:12.525 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:12.525 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:12.525 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2934776' 00:14:12.525 killing process with pid 2934776 00:14:12.525 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2934776 00:14:12.525 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2934776 00:14:12.525 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:12.526 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:12.526 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:12.526 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:12.526 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:12.526 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.526 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.526 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.072 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:15.072 00:14:15.072 real 0m12.655s 00:14:15.072 user 0m19.594s 00:14:15.072 sys 0m4.913s 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.073 ************************************ 00:14:15.073 END TEST nvmf_nvme_cli 00:14:15.073 ************************************ 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:15.073 ************************************ 00:14:15.073 START TEST nvmf_vfio_user 00:14:15.073 ************************************ 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:15.073 * Looking for test storage... 
00:14:15.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:15.073 13:55:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2936061 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2936061' 00:14:15.073 Process pid: 2936061 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2936061 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2936061 ']' 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:15.073 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:15.073 [2024-07-26 13:55:42.265773] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:14:15.073 [2024-07-26 13:55:42.265824] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.073 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.073 [2024-07-26 13:55:42.321654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:15.073 [2024-07-26 13:55:42.400531] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.073 [2024-07-26 13:55:42.400571] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:15.073 [2024-07-26 13:55:42.400579] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.073 [2024-07-26 13:55:42.400585] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.073 [2024-07-26 13:55:42.400590] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:15.073 [2024-07-26 13:55:42.400631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.073 [2024-07-26 13:55:42.400727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.073 [2024-07-26 13:55:42.400794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:15.073 [2024-07-26 13:55:42.400795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.651 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:15.651 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:15.651 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:17.031 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:17.031 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:17.031 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:17.031 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:17.031 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:17.031 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:17.031 Malloc1 00:14:17.289 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:17.289 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:17.548 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:17.808 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:17.808 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:17.808 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:17.808 Malloc2 00:14:17.808 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
00:14:18.068 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:18.328 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:18.590 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:18.590 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:18.590 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:18.590 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:18.590 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:18.590 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:18.590 [2024-07-26 13:55:45.787581] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:14:18.590 [2024-07-26 13:55:45.787610] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2936549 ] 00:14:18.590 EAL: No free 2048 kB hugepages reported on node 1 00:14:18.590 [2024-07-26 13:55:45.815558] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:18.590 [2024-07-26 13:55:45.818053] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:18.590 [2024-07-26 13:55:45.818072] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f045dc1a000 00:14:18.590 [2024-07-26 13:55:45.819050] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:18.590 [2024-07-26 13:55:45.820055] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:18.590 [2024-07-26 13:55:45.821058] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:18.590 [2024-07-26 13:55:45.822061] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:18.590 [2024-07-26 13:55:45.823068] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:18.590 [2024-07-26 13:55:45.824071] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:18.590 [2024-07-26 13:55:45.825072] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:18.590 [2024-07-26 13:55:45.826071] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:18.590 [2024-07-26 13:55:45.827086] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:18.590 [2024-07-26 13:55:45.827097] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f045dc0f000 00:14:18.590 [2024-07-26 13:55:45.828034] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:18.590 [2024-07-26 13:55:45.837646] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:18.590 [2024-07-26 13:55:45.837671] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:18.590 [2024-07-26 13:55:45.842172] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:18.590 [2024-07-26 13:55:45.842210] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:18.590 [2024-07-26 13:55:45.842287] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:18.590 [2024-07-26 13:55:45.842302] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:18.590 [2024-07-26 13:55:45.842307] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:18.590 [2024-07-26 13:55:45.843173] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:18.590 [2024-07-26 13:55:45.843184] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:18.590 [2024-07-26 13:55:45.843190] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:18.590 [2024-07-26 13:55:45.844177] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:18.590 [2024-07-26 13:55:45.844185] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:18.590 [2024-07-26 13:55:45.844191] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:18.590 [2024-07-26 13:55:45.845183] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:18.590 [2024-07-26 13:55:45.845191] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:18.590 [2024-07-26 13:55:45.846187] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:18.590 [2024-07-26 13:55:45.846195] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:18.590 [2024-07-26 13:55:45.846199] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:18.590 [2024-07-26 13:55:45.846205] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:18.591 [2024-07-26 13:55:45.846310] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:18.591 [2024-07-26 13:55:45.846315] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:18.591 [2024-07-26 13:55:45.846319] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:18.591 [2024-07-26 13:55:45.847191] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:18.591 [2024-07-26 13:55:45.848198] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:18.591 [2024-07-26 13:55:45.853052] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:18.591 [2024-07-26 13:55:45.853212] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:18.591 [2024-07-26 13:55:45.853289] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:18.591 [2024-07-26 13:55:45.854223] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:18.591 [2024-07-26 13:55:45.854231] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:18.591 [2024-07-26 13:55:45.854235] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854252] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:18.591 [2024-07-26 13:55:45.854260] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854273] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:18.591 [2024-07-26 13:55:45.854277] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:18.591 [2024-07-26 13:55:45.854281] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:18.591 [2024-07-26 13:55:45.854293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:18.591 [2024-07-26 13:55:45.854339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:18.591 [2024-07-26 13:55:45.854348] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:18.591 [2024-07-26 13:55:45.854352] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:18.591 [2024-07-26 13:55:45.854356] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:18.591 [2024-07-26 13:55:45.854360] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:18.591 [2024-07-26 13:55:45.854365] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:18.591 [2024-07-26 13:55:45.854368] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:18.591 [2024-07-26 13:55:45.854372] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854379] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854391] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:18.591 [2024-07-26 13:55:45.854405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:18.591 [2024-07-26 13:55:45.854416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.591 [2024-07-26 13:55:45.854424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.591 [2024-07-26 13:55:45.854433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.591 [2024-07-26 13:55:45.854441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.591 [2024-07-26 13:55:45.854445] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854452] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854461] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:18.591 [2024-07-26 13:55:45.854469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:18.591 [2024-07-26 13:55:45.854474] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:18.591 
[2024-07-26 13:55:45.854479] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854487] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854492] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854500] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:18.591 [2024-07-26 13:55:45.854513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:18.591 [2024-07-26 13:55:45.854562] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854570] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854576] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:18.591 [2024-07-26 13:55:45.854580] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:18.591 [2024-07-26 13:55:45.854583] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:18.591 [2024-07-26 13:55:45.854588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:18.591 [2024-07-26 13:55:45.854599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:18.591 [2024-07-26 13:55:45.854608] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:18.591 [2024-07-26 13:55:45.854618] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854624] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854630] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:18.591 [2024-07-26 13:55:45.854634] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:18.591 [2024-07-26 13:55:45.854637] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:18.591 [2024-07-26 13:55:45.854642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:18.591 [2024-07-26 13:55:45.854665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:18.591 [2024-07-26 13:55:45.854676] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854683] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854689] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:18.591 [2024-07-26 13:55:45.854693] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:18.591 [2024-07-26 13:55:45.854696] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:18.591 [2024-07-26 13:55:45.854701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:18.591 [2024-07-26 13:55:45.854711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:18.591 [2024-07-26 13:55:45.854717] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854723] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854730] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854738] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854742] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854747] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854751] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:18.591 [2024-07-26 13:55:45.854755] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:18.591 [2024-07-26 13:55:45.854760] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:18.591 [2024-07-26 13:55:45.854774] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:18.591 [2024-07-26 13:55:45.854786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:18.591 [2024-07-26 13:55:45.854796] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:18.591 [2024-07-26 13:55:45.854802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:18.592 [2024-07-26 13:55:45.854811] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:18.592 [2024-07-26 
13:55:45.854819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:18.592 [2024-07-26 13:55:45.854829] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:18.592 [2024-07-26 13:55:45.854839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:18.592 [2024-07-26 13:55:45.854851] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:18.592 [2024-07-26 13:55:45.854856] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:18.592 [2024-07-26 13:55:45.854859] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:18.592 [2024-07-26 13:55:45.854862] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:18.592 [2024-07-26 13:55:45.854865] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:18.592 [2024-07-26 13:55:45.854870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:18.592 [2024-07-26 13:55:45.854877] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:18.592 [2024-07-26 13:55:45.854880] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:18.592 [2024-07-26 13:55:45.854883] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:18.592 [2024-07-26 13:55:45.854889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:18.592 [2024-07-26 13:55:45.854895] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:18.592 [2024-07-26 13:55:45.854899] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:18.592 [2024-07-26 13:55:45.854902] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:18.592 [2024-07-26 13:55:45.854907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:18.592 [2024-07-26 13:55:45.854913] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:18.592 [2024-07-26 13:55:45.854917] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:18.592 [2024-07-26 13:55:45.854920] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:18.592 [2024-07-26 13:55:45.854925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:18.592 [2024-07-26 13:55:45.854931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:18.592 [2024-07-26 13:55:45.854943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:18.592 [2024-07-26 
13:55:45.854953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:18.592 [2024-07-26 13:55:45.854959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:18.592 ===================================================== 00:14:18.592 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:18.592 ===================================================== 00:14:18.592 Controller Capabilities/Features 00:14:18.592 ================================ 00:14:18.592 Vendor ID: 4e58 00:14:18.592 Subsystem Vendor ID: 4e58 00:14:18.592 Serial Number: SPDK1 00:14:18.592 Model Number: SPDK bdev Controller 00:14:18.592 Firmware Version: 24.09 00:14:18.592 Recommended Arb Burst: 6 00:14:18.592 IEEE OUI Identifier: 8d 6b 50 00:14:18.592 Multi-path I/O 00:14:18.592 May have multiple subsystem ports: Yes 00:14:18.592 May have multiple controllers: Yes 00:14:18.592 Associated with SR-IOV VF: No 00:14:18.592 Max Data Transfer Size: 131072 00:14:18.592 Max Number of Namespaces: 32 00:14:18.592 Max Number of I/O Queues: 127 00:14:18.592 NVMe Specification Version (VS): 1.3 00:14:18.592 NVMe Specification Version (Identify): 1.3 00:14:18.592 Maximum Queue Entries: 256 00:14:18.592 Contiguous Queues Required: Yes 00:14:18.592 Arbitration Mechanisms Supported 00:14:18.592 Weighted Round Robin: Not Supported 00:14:18.592 Vendor Specific: Not Supported 00:14:18.592 Reset Timeout: 15000 ms 00:14:18.592 Doorbell Stride: 4 bytes 00:14:18.592 NVM Subsystem Reset: Not Supported 00:14:18.592 Command Sets Supported 00:14:18.592 NVM Command Set: Supported 00:14:18.592 Boot Partition: Not Supported 00:14:18.592 Memory Page Size Minimum: 4096 bytes 00:14:18.592 Memory Page Size Maximum: 4096 bytes 00:14:18.592 Persistent Memory Region: Not Supported 00:14:18.592 Optional Asynchronous Events Supported 00:14:18.592 Namespace Attribute Notices: Supported 00:14:18.592 Firmware Activation Notices: Not Supported 00:14:18.592 ANA Change Notices: Not Supported 00:14:18.592 PLE Aggregate Log Change Notices: Not Supported 00:14:18.592 LBA Status Info Alert Notices: Not Supported 00:14:18.592 EGE Aggregate Log Change Notices: Not Supported 00:14:18.592 Normal NVM Subsystem Shutdown event: Not Supported 00:14:18.592 Zone Descriptor Change Notices: Not Supported 00:14:18.592 Discovery Log Change Notices: Not Supported 00:14:18.592 Controller Attributes 00:14:18.592 128-bit Host Identifier: Supported 00:14:18.592 Non-Operational Permissive Mode: Not Supported 00:14:18.592 NVM Sets: Not Supported 00:14:18.592 Read Recovery Levels: Not Supported 00:14:18.592 Endurance Groups: Not Supported 00:14:18.592 Predictable Latency Mode: Not Supported 00:14:18.592 Traffic Based Keep ALive: Not Supported 00:14:18.592 Namespace Granularity: Not Supported 00:14:18.592 SQ Associations: Not Supported 00:14:18.592 UUID List: Not Supported 00:14:18.592 Multi-Domain Subsystem: Not Supported 00:14:18.592 Fixed Capacity Management: Not Supported 00:14:18.592 Variable Capacity Management: Not Supported 00:14:18.592 Delete Endurance Group: Not Supported 00:14:18.592 Delete NVM Set: Not Supported 00:14:18.592 Extended LBA Formats Supported: Not Supported 00:14:18.592 Flexible Data Placement Supported: Not Supported 00:14:18.592 00:14:18.592 Controller Memory Buffer Support 00:14:18.592 ================================ 00:14:18.592 Supported: No 00:14:18.592 00:14:18.592 Persistent 
Memory Region Support 00:14:18.592 ================================ 00:14:18.592 Supported: No 00:14:18.592 00:14:18.592 Admin Command Set Attributes 00:14:18.592 ============================ 00:14:18.592 Security Send/Receive: Not Supported 00:14:18.592 Format NVM: Not Supported 00:14:18.592 Firmware Activate/Download: Not Supported 00:14:18.592 Namespace Management: Not Supported 00:14:18.592 Device Self-Test: Not Supported 00:14:18.592 Directives: Not Supported 00:14:18.592 NVMe-MI: Not Supported 00:14:18.592 Virtualization Management: Not Supported 00:14:18.592 Doorbell Buffer Config: Not Supported 00:14:18.592 Get LBA Status Capability: Not Supported 00:14:18.592 Command & Feature Lockdown Capability: Not Supported 00:14:18.592 Abort Command Limit: 4 00:14:18.592 Async Event Request Limit: 4 00:14:18.592 Number of Firmware Slots: N/A 00:14:18.592 Firmware Slot 1 Read-Only: N/A 00:14:18.592 Firmware Activation Without Reset: N/A 00:14:18.592 Multiple Update Detection Support: N/A 00:14:18.592 Firmware Update Granularity: No Information Provided 00:14:18.592 Per-Namespace SMART Log: No 00:14:18.592 Asymmetric Namespace Access Log Page: Not Supported 00:14:18.592 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:18.592 Command Effects Log Page: Supported 00:14:18.592 Get Log Page Extended Data: Supported 00:14:18.592 Telemetry Log Pages: Not Supported 00:14:18.592 Persistent Event Log Pages: Not Supported 00:14:18.592 Supported Log Pages Log Page: May Support 00:14:18.592 Commands Supported & Effects Log Page: Not Supported 00:14:18.592 Feature Identifiers & Effects Log Page:May Support 00:14:18.592 NVMe-MI Commands & Effects Log Page: May Support 00:14:18.592 Data Area 4 for Telemetry Log: Not Supported 00:14:18.592 Error Log Page Entries Supported: 128 00:14:18.592 Keep Alive: Supported 00:14:18.592 Keep Alive Granularity: 10000 ms 00:14:18.592 00:14:18.592 NVM Command Set Attributes 00:14:18.592 ========================== 00:14:18.592 Submission Queue Entry Size 00:14:18.592 Max: 64 00:14:18.592 Min: 64 00:14:18.592 Completion Queue Entry Size 00:14:18.592 Max: 16 00:14:18.592 Min: 16 00:14:18.592 Number of Namespaces: 32 00:14:18.592 Compare Command: Supported 00:14:18.592 Write Uncorrectable Command: Not Supported 00:14:18.592 Dataset Management Command: Supported 00:14:18.592 Write Zeroes Command: Supported 00:14:18.592 Set Features Save Field: Not Supported 00:14:18.592 Reservations: Not Supported 00:14:18.592 Timestamp: Not Supported 00:14:18.592 Copy: Supported 00:14:18.592 Volatile Write Cache: Present 00:14:18.592 Atomic Write Unit (Normal): 1 00:14:18.592 Atomic Write Unit (PFail): 1 00:14:18.592 Atomic Compare & Write Unit: 1 00:14:18.592 Fused Compare & Write: Supported 00:14:18.592 Scatter-Gather List 00:14:18.592 SGL Command Set: Supported (Dword aligned) 00:14:18.592 SGL Keyed: Not Supported 00:14:18.592 SGL Bit Bucket Descriptor: Not Supported 00:14:18.593 SGL Metadata Pointer: Not Supported 00:14:18.593 Oversized SGL: Not Supported 00:14:18.593 SGL Metadata Address: Not Supported 00:14:18.593 SGL Offset: Not Supported 00:14:18.593 Transport SGL Data Block: Not Supported 00:14:18.593 Replay Protected Memory Block: Not Supported 00:14:18.593 00:14:18.593 Firmware Slot Information 00:14:18.593 ========================= 00:14:18.593 Active slot: 1 00:14:18.593 Slot 1 Firmware Revision: 24.09 00:14:18.593 00:14:18.593 00:14:18.593 Commands Supported and Effects 00:14:18.593 ============================== 00:14:18.593 Admin Commands 00:14:18.593 -------------- 00:14:18.593 Get 
Log Page (02h): Supported 00:14:18.593 Identify (06h): Supported 00:14:18.593 Abort (08h): Supported 00:14:18.593 Set Features (09h): Supported 00:14:18.593 Get Features (0Ah): Supported 00:14:18.593 Asynchronous Event Request (0Ch): Supported 00:14:18.593 Keep Alive (18h): Supported 00:14:18.593 I/O Commands 00:14:18.593 ------------ 00:14:18.593 Flush (00h): Supported LBA-Change 00:14:18.593 Write (01h): Supported LBA-Change 00:14:18.593 Read (02h): Supported 00:14:18.593 Compare (05h): Supported 00:14:18.593 Write Zeroes (08h): Supported LBA-Change 00:14:18.593 Dataset Management (09h): Supported LBA-Change 00:14:18.593 Copy (19h): Supported LBA-Change 00:14:18.593 00:14:18.593 Error Log 00:14:18.593 ========= 00:14:18.593 00:14:18.593 Arbitration 00:14:18.593 =========== 00:14:18.593 Arbitration Burst: 1 00:14:18.593 00:14:18.593 Power Management 00:14:18.593 ================ 00:14:18.593 Number of Power States: 1 00:14:18.593 Current Power State: Power State #0 00:14:18.593 Power State #0: 00:14:18.593 Max Power: 0.00 W 00:14:18.593 Non-Operational State: Operational 00:14:18.593 Entry Latency: Not Reported 00:14:18.593 Exit Latency: Not Reported 00:14:18.593 Relative Read Throughput: 0 00:14:18.593 Relative Read Latency: 0 00:14:18.593 Relative Write Throughput: 0 00:14:18.593 Relative Write Latency: 0 00:14:18.593 Idle Power: Not Reported 00:14:18.593 Active Power: Not Reported 00:14:18.593 Non-Operational Permissive Mode: Not Supported 00:14:18.593 00:14:18.593 Health Information 00:14:18.593 ================== 00:14:18.593 Critical Warnings: 00:14:18.593 Available Spare Space: OK 00:14:18.593 Temperature: OK 00:14:18.593 Device Reliability: OK 00:14:18.593 Read Only: No 00:14:18.593 Volatile Memory Backup: OK 00:14:18.593 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:18.593 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:18.593 Available Spare: 0% 00:14:18.593 Available Sp[2024-07-26 13:55:45.855049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:18.593 [2024-07-26 13:55:45.855057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:18.593 [2024-07-26 13:55:45.855080] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:18.593 [2024-07-26 13:55:45.855088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.593 [2024-07-26 13:55:45.855093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.593 [2024-07-26 13:55:45.855099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.593 [2024-07-26 13:55:45.855105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.593 [2024-07-26 13:55:45.855236] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:18.593 [2024-07-26 13:55:45.855245] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:18.593 [2024-07-26 13:55:45.856235] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:18.593 [2024-07-26 13:55:45.856284] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:18.593 [2024-07-26 13:55:45.856290] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:18.593 [2024-07-26 13:55:45.857247] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:18.593 [2024-07-26 13:55:45.857256] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:18.593 [2024-07-26 13:55:45.857305] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:18.593 [2024-07-26 13:55:45.859278] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:18.593 are Threshold: 0% 00:14:18.593 Life Percentage Used: 0% 00:14:18.593 Data Units Read: 0 00:14:18.593 Data Units Written: 0 00:14:18.593 Host Read Commands: 0 00:14:18.593 Host Write Commands: 0 00:14:18.593 Controller Busy Time: 0 minutes 00:14:18.593 Power Cycles: 0 00:14:18.593 Power On Hours: 0 hours 00:14:18.593 Unsafe Shutdowns: 0 00:14:18.593 Unrecoverable Media Errors: 0 00:14:18.593 Lifetime Error Log Entries: 0 00:14:18.593 Warning Temperature Time: 0 minutes 00:14:18.593 Critical Temperature Time: 0 minutes 00:14:18.593 00:14:18.593 Number of Queues 00:14:18.593 ================ 00:14:18.593 Number of I/O Submission Queues: 127 00:14:18.593 Number of I/O Completion Queues: 127 00:14:18.593 00:14:18.593 Active Namespaces 00:14:18.593 ================= 00:14:18.593 Namespace ID:1 00:14:18.593 Error Recovery Timeout: Unlimited 00:14:18.593 Command Set Identifier: NVM (00h) 00:14:18.593 Deallocate: Supported 00:14:18.593 Deallocated/Unwritten Error: Not Supported 00:14:18.593 Deallocated Read Value: Unknown 00:14:18.593 Deallocate in Write Zeroes: Not Supported 00:14:18.593 Deallocated Guard Field: 0xFFFF 00:14:18.593 Flush: Supported 00:14:18.593 Reservation: Supported 00:14:18.593 Namespace Sharing Capabilities: Multiple Controllers 00:14:18.593 Size (in LBAs): 131072 (0GiB) 00:14:18.593 Capacity (in LBAs): 131072 (0GiB) 00:14:18.593 Utilization (in LBAs): 131072 (0GiB) 00:14:18.593 NGUID: 04A83ACB452144CFBB49EA4E454F8730 00:14:18.593 UUID: 04a83acb-4521-44cf-bb49-ea4e454f8730 00:14:18.593 Thin Provisioning: Not Supported 00:14:18.593 Per-NS Atomic Units: Yes 00:14:18.593 Atomic Boundary Size (Normal): 0 00:14:18.593 Atomic Boundary Size (PFail): 0 00:14:18.593 Atomic Boundary Offset: 0 00:14:18.593 Maximum Single Source Range Length: 65535 00:14:18.593 Maximum Copy Length: 65535 00:14:18.593 Maximum Source Range Count: 1 00:14:18.593 NGUID/EUI64 Never Reused: No 00:14:18.593 Namespace Write Protected: No 00:14:18.593 Number of LBA Formats: 1 00:14:18.593 Current LBA Format: LBA Format #00 00:14:18.593 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:18.593 00:14:18.593 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:18.593 EAL: No free 2048 kB hugepages reported 
on node 1 00:14:18.853 [2024-07-26 13:55:46.073840] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:24.137 Initializing NVMe Controllers 00:14:24.137 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:24.137 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:24.137 Initialization complete. Launching workers. 00:14:24.137 ======================================================== 00:14:24.137 Latency(us) 00:14:24.137 Device Information : IOPS MiB/s Average min max 00:14:24.137 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39909.20 155.90 3207.42 964.14 10105.21 00:14:24.137 ======================================================== 00:14:24.137 Total : 39909.20 155.90 3207.42 964.14 10105.21 00:14:24.137 00:14:24.137 [2024-07-26 13:55:51.098241] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:24.137 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:24.137 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.137 [2024-07-26 13:55:51.320235] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:29.424 Initializing NVMe Controllers 00:14:29.424 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:29.424 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:29.424 Initialization complete. Launching workers. 
00:14:29.424 ======================================================== 00:14:29.424 Latency(us) 00:14:29.424 Device Information : IOPS MiB/s Average min max 00:14:29.424 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16042.48 62.67 7978.13 6980.18 8970.05 00:14:29.424 ======================================================== 00:14:29.424 Total : 16042.48 62.67 7978.13 6980.18 8970.05 00:14:29.424 00:14:29.424 [2024-07-26 13:55:56.353800] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:29.424 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:29.424 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.424 [2024-07-26 13:55:56.543755] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:34.708 [2024-07-26 13:56:01.612304] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:34.708 Initializing NVMe Controllers 00:14:34.708 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:34.708 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:34.708 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:34.708 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:34.708 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:34.708 Initialization complete. Launching workers. 00:14:34.708 Starting thread on core 2 00:14:34.708 Starting thread on core 3 00:14:34.708 Starting thread on core 1 00:14:34.708 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:34.708 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.708 [2024-07-26 13:56:01.899453] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:38.005 [2024-07-26 13:56:04.966060] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:38.005 Initializing NVMe Controllers 00:14:38.005 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:38.005 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:38.005 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:38.005 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:38.005 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:38.005 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:38.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:38.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:38.005 Initialization complete. Launching workers. 
00:14:38.005 Starting thread on core 1 with urgent priority queue 00:14:38.005 Starting thread on core 2 with urgent priority queue 00:14:38.005 Starting thread on core 3 with urgent priority queue 00:14:38.005 Starting thread on core 0 with urgent priority queue 00:14:38.005 SPDK bdev Controller (SPDK1 ) core 0: 8479.33 IO/s 11.79 secs/100000 ios 00:14:38.005 SPDK bdev Controller (SPDK1 ) core 1: 7852.33 IO/s 12.74 secs/100000 ios 00:14:38.005 SPDK bdev Controller (SPDK1 ) core 2: 8970.33 IO/s 11.15 secs/100000 ios 00:14:38.005 SPDK bdev Controller (SPDK1 ) core 3: 9383.33 IO/s 10.66 secs/100000 ios 00:14:38.005 ======================================================== 00:14:38.005 00:14:38.005 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:38.005 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.005 [2024-07-26 13:56:05.238571] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:38.005 Initializing NVMe Controllers 00:14:38.005 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:38.005 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:38.005 Namespace ID: 1 size: 0GB 00:14:38.005 Initialization complete. 00:14:38.005 INFO: using host memory buffer for IO 00:14:38.005 Hello world! 00:14:38.005 [2024-07-26 13:56:05.271779] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:38.005 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:38.005 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.265 [2024-07-26 13:56:05.535423] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:39.205 Initializing NVMe Controllers 00:14:39.205 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:39.205 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:39.205 Initialization complete. Launching workers. 
00:14:39.205 submit (in ns) avg, min, max = 6148.3, 3220.0, 3999713.9 00:14:39.205 complete (in ns) avg, min, max = 21384.4, 1757.4, 4002844.3 00:14:39.205 00:14:39.205 Submit histogram 00:14:39.205 ================ 00:14:39.205 Range in us Cumulative Count 00:14:39.205 3.214 - 3.228: 0.0122% ( 2) 00:14:39.205 3.228 - 3.242: 0.0306% ( 3) 00:14:39.205 3.242 - 3.256: 0.0550% ( 4) 00:14:39.205 3.256 - 3.270: 0.1101% ( 9) 00:14:39.205 3.270 - 3.283: 0.2629% ( 25) 00:14:39.205 3.283 - 3.297: 0.9111% ( 106) 00:14:39.205 3.297 - 3.311: 3.7119% ( 458) 00:14:39.205 3.311 - 3.325: 8.3104% ( 752) 00:14:39.205 3.325 - 3.339: 13.5205% ( 852) 00:14:39.205 3.339 - 3.353: 19.7334% ( 1016) 00:14:39.205 3.353 - 3.367: 26.0931% ( 1040) 00:14:39.205 3.367 - 3.381: 31.9880% ( 964) 00:14:39.205 3.381 - 3.395: 37.8157% ( 953) 00:14:39.205 3.395 - 3.409: 43.1603% ( 874) 00:14:39.205 3.409 - 3.423: 47.6793% ( 739) 00:14:39.205 3.423 - 3.437: 51.6725% ( 653) 00:14:39.205 3.437 - 3.450: 57.1944% ( 903) 00:14:39.205 3.450 - 3.464: 64.8199% ( 1247) 00:14:39.205 3.464 - 3.478: 69.5652% ( 776) 00:14:39.205 3.478 - 3.492: 73.3933% ( 626) 00:14:39.205 3.492 - 3.506: 78.0040% ( 754) 00:14:39.205 3.506 - 3.520: 81.8382% ( 627) 00:14:39.205 3.520 - 3.534: 84.2842% ( 400) 00:14:39.205 3.534 - 3.548: 85.7029% ( 232) 00:14:39.205 3.548 - 3.562: 86.4000% ( 114) 00:14:39.205 3.562 - 3.590: 87.2929% ( 146) 00:14:39.205 3.590 - 3.617: 88.6565% ( 223) 00:14:39.205 3.617 - 3.645: 90.4544% ( 294) 00:14:39.205 3.645 - 3.673: 92.4051% ( 319) 00:14:39.205 3.673 - 3.701: 94.0256% ( 265) 00:14:39.205 3.701 - 3.729: 95.7194% ( 277) 00:14:39.205 3.729 - 3.757: 97.2360% ( 248) 00:14:39.205 3.757 - 3.784: 98.1899% ( 156) 00:14:39.205 3.784 - 3.812: 98.8381% ( 106) 00:14:39.205 3.812 - 3.840: 99.2417% ( 66) 00:14:39.205 3.840 - 3.868: 99.4802% ( 39) 00:14:39.205 3.868 - 3.896: 99.5475% ( 11) 00:14:39.205 3.896 - 3.923: 99.5842% ( 6) 00:14:39.205 3.923 - 3.951: 99.5964% ( 2) 00:14:39.205 3.951 - 3.979: 99.6025% ( 1) 00:14:39.205 3.979 - 4.007: 99.6209% ( 3) 00:14:39.205 4.090 - 4.118: 99.6270% ( 1) 00:14:39.205 4.842 - 4.870: 99.6331% ( 1) 00:14:39.205 5.064 - 5.092: 99.6392% ( 1) 00:14:39.205 5.120 - 5.148: 99.6453% ( 1) 00:14:39.205 5.343 - 5.370: 99.6514% ( 1) 00:14:39.205 5.370 - 5.398: 99.6576% ( 1) 00:14:39.205 5.482 - 5.510: 99.6637% ( 1) 00:14:39.205 5.510 - 5.537: 99.6698% ( 1) 00:14:39.205 5.537 - 5.565: 99.6759% ( 1) 00:14:39.205 5.565 - 5.593: 99.6820% ( 1) 00:14:39.205 5.593 - 5.621: 99.6881% ( 1) 00:14:39.205 5.843 - 5.871: 99.6942% ( 1) 00:14:39.205 5.899 - 5.927: 99.7004% ( 1) 00:14:39.205 6.038 - 6.066: 99.7126% ( 2) 00:14:39.205 6.066 - 6.094: 99.7187% ( 1) 00:14:39.205 6.122 - 6.150: 99.7248% ( 1) 00:14:39.205 6.205 - 6.233: 99.7309% ( 1) 00:14:39.205 6.261 - 6.289: 99.7371% ( 1) 00:14:39.205 6.372 - 6.400: 99.7554% ( 3) 00:14:39.205 6.428 - 6.456: 99.7615% ( 1) 00:14:39.205 6.483 - 6.511: 99.7676% ( 1) 00:14:39.205 6.567 - 6.595: 99.7737% ( 1) 00:14:39.205 6.595 - 6.623: 99.7799% ( 1) 00:14:39.205 6.623 - 6.650: 99.7982% ( 3) 00:14:39.205 6.678 - 6.706: 99.8043% ( 1) 00:14:39.205 6.706 - 6.734: 99.8165% ( 2) 00:14:39.206 6.790 - 6.817: 99.8227% ( 1) 00:14:39.206 6.845 - 6.873: 99.8349% ( 2) 00:14:39.206 6.929 - 6.957: 99.8471% ( 2) 00:14:39.206 7.012 - 7.040: 99.8594% ( 2) 00:14:39.206 7.123 - 7.179: 99.8655% ( 1) 00:14:39.206 7.235 - 7.290: 99.8716% ( 1) 00:14:39.206 7.569 - 7.624: 99.8838% ( 2) 00:14:39.206 7.958 - 8.014: 99.8899% ( 1) 00:14:39.206 8.014 - 8.070: 99.8960% ( 1) 00:14:39.206 8.570 - 8.626: 99.9022% ( 1) 
00:14:39.206 9.294 - 9.350: 99.9083% ( 1) 00:14:39.206 13.857 - 13.913: 99.9144% ( 1) 00:14:39.206 [2024-07-26 13:56:06.557380] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:39.206 15.471 - 15.583: 99.9205% ( 1) 00:14:39.206 15.694 - 15.805: 99.9266% ( 1) 00:14:39.206 19.367 - 19.478: 99.9327% ( 1) 00:14:39.206 3989.148 - 4017.642: 100.0000% ( 11) 00:14:39.206 00:14:39.206 Complete histogram 00:14:39.206 ================== 00:14:39.206 Range in us Cumulative Count 00:14:39.206 1.753 - 1.760: 0.0061% ( 1) 00:14:39.206 1.760 - 1.767: 0.0612% ( 9) 00:14:39.206 1.767 - 1.774: 0.2324% ( 28) 00:14:39.206 1.774 - 1.781: 0.3241% ( 15) 00:14:39.206 1.781 - 1.795: 0.3730% ( 8) 00:14:39.206 1.795 - 1.809: 0.9845% ( 100) 00:14:39.206 1.809 - 1.823: 24.9434% ( 3918) 00:14:39.206 1.823 - 1.837: 68.8803% ( 7185) 00:14:39.206 1.837 - 1.850: 76.4447% ( 1237) 00:14:39.206 1.850 - 1.864: 81.9850% ( 906) 00:14:39.206 1.864 - 1.878: 91.0720% ( 1486) 00:14:39.206 1.878 - 1.892: 95.1385% ( 665) 00:14:39.206 1.892 - 1.906: 97.3583% ( 363) 00:14:39.206 1.906 - 1.920: 98.4162% ( 173) 00:14:39.206 1.920 - 1.934: 98.7097% ( 48) 00:14:39.206 1.934 - 1.948: 98.9482% ( 39) 00:14:39.206 1.948 - 1.962: 99.1133% ( 27) 00:14:39.206 1.962 - 1.976: 99.1622% ( 8) 00:14:39.206 1.976 - 1.990: 99.1989% ( 6) 00:14:39.206 1.990 - 2.003: 99.2234% ( 4) 00:14:39.206 2.003 - 2.017: 99.2478% ( 4) 00:14:39.206 2.017 - 2.031: 99.2662% ( 3) 00:14:39.206 2.031 - 2.045: 99.2784% ( 2) 00:14:39.206 2.045 - 2.059: 99.2907% ( 2) 00:14:39.206 2.059 - 2.073: 99.2968% ( 1) 00:14:39.206 2.129 - 2.143: 99.3029% ( 1) 00:14:39.206 2.157 - 2.170: 99.3151% ( 2) 00:14:39.206 2.240 - 2.254: 99.3212% ( 1) 00:14:39.206 2.254 - 2.268: 99.3273% ( 1) 00:14:39.206 3.409 - 3.423: 99.3335% ( 1) 00:14:39.206 3.868 - 3.896: 99.3396% ( 1) 00:14:39.206 4.146 - 4.174: 99.3457% ( 1) 00:14:39.206 4.313 - 4.341: 99.3579% ( 2) 00:14:39.206 4.369 - 4.397: 99.3640% ( 1) 00:14:39.206 4.536 - 4.563: 99.3763% ( 2) 00:14:39.206 4.563 - 4.591: 99.3824% ( 1) 00:14:39.206 4.647 - 4.675: 99.3885% ( 1) 00:14:39.206 4.675 - 4.703: 99.3946% ( 1) 00:14:39.206 4.703 - 4.730: 99.4007% ( 1) 00:14:39.206 4.758 - 4.786: 99.4068% ( 1) 00:14:39.206 4.870 - 4.897: 99.4130% ( 1) 00:14:39.206 5.009 - 5.037: 99.4191% ( 1) 00:14:39.206 5.092 - 5.120: 99.4252% ( 1) 00:14:39.206 5.343 - 5.370: 99.4313% ( 1) 00:14:39.206 5.482 - 5.510: 99.4374% ( 1) 00:14:39.206 5.510 - 5.537: 99.4435% ( 1) 00:14:39.206 5.677 - 5.704: 99.4496% ( 1) 00:14:39.206 5.760 - 5.788: 99.4558% ( 1) 00:14:39.206 5.871 - 5.899: 99.4619% ( 1) 00:14:39.206 5.955 - 5.983: 99.4680% ( 1) 00:14:39.206 5.983 - 6.010: 99.4741% ( 1) 00:14:39.206 6.066 - 6.094: 99.4802% ( 1) 00:14:39.206 6.094 - 6.122: 99.4863% ( 1) 00:14:39.206 6.177 - 6.205: 99.4924% ( 1) 00:14:39.206 6.205 - 6.233: 99.4986% ( 1) 00:14:39.206 6.372 - 6.400: 99.5047% ( 1) 00:14:39.206 6.400 - 6.428: 99.5108% ( 1) 00:14:39.206 3989.148 - 4017.642: 100.0000% ( 80) 00:14:39.206 00:14:39.206 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:39.206 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:39.206 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:39.206 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:39.206 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:39.467 [ 00:14:39.467 { 00:14:39.467 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:39.467 "subtype": "Discovery", 00:14:39.467 "listen_addresses": [], 00:14:39.467 "allow_any_host": true, 00:14:39.467 "hosts": [] 00:14:39.467 }, 00:14:39.467 { 00:14:39.467 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:39.467 "subtype": "NVMe", 00:14:39.467 "listen_addresses": [ 00:14:39.467 { 00:14:39.467 "trtype": "VFIOUSER", 00:14:39.467 "adrfam": "IPv4", 00:14:39.467 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:39.467 "trsvcid": "0" 00:14:39.467 } 00:14:39.467 ], 00:14:39.467 "allow_any_host": true, 00:14:39.467 "hosts": [], 00:14:39.467 "serial_number": "SPDK1", 00:14:39.467 "model_number": "SPDK bdev Controller", 00:14:39.467 "max_namespaces": 32, 00:14:39.467 "min_cntlid": 1, 00:14:39.467 "max_cntlid": 65519, 00:14:39.467 "namespaces": [ 00:14:39.467 { 00:14:39.467 "nsid": 1, 00:14:39.467 "bdev_name": "Malloc1", 00:14:39.467 "name": "Malloc1", 00:14:39.467 "nguid": "04A83ACB452144CFBB49EA4E454F8730", 00:14:39.467 "uuid": "04a83acb-4521-44cf-bb49-ea4e454f8730" 00:14:39.467 } 00:14:39.467 ] 00:14:39.467 }, 00:14:39.467 { 00:14:39.467 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:39.467 "subtype": "NVMe", 00:14:39.467 "listen_addresses": [ 00:14:39.467 { 00:14:39.467 "trtype": "VFIOUSER", 00:14:39.467 "adrfam": "IPv4", 00:14:39.467 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:39.467 "trsvcid": "0" 00:14:39.467 } 00:14:39.467 ], 00:14:39.467 "allow_any_host": true, 00:14:39.467 "hosts": [], 00:14:39.467 "serial_number": "SPDK2", 00:14:39.467 "model_number": "SPDK bdev Controller", 00:14:39.467 "max_namespaces": 32, 00:14:39.467 "min_cntlid": 1, 00:14:39.467 "max_cntlid": 65519, 00:14:39.467 "namespaces": [ 00:14:39.467 { 00:14:39.467 "nsid": 1, 00:14:39.467 "bdev_name": "Malloc2", 00:14:39.467 "name": "Malloc2", 00:14:39.467 "nguid": "3010DFC97BEC42B8860608C99F87E686", 00:14:39.467 "uuid": "3010dfc9-7bec-42b8-8606-08c99f87e686" 00:14:39.467 } 00:14:39.467 ] 00:14:39.467 } 00:14:39.467 ] 00:14:39.467 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:39.467 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:39.467 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2940012 00:14:39.467 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:39.467 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:39.467 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:39.467 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:39.467 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:39.467 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:39.467 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:39.467 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.727 [2024-07-26 13:56:06.924934] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:39.727 Malloc3 00:14:39.727 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:39.727 [2024-07-26 13:56:07.151613] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:40.020 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:40.020 Asynchronous Event Request test 00:14:40.020 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:40.020 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:40.020 Registering asynchronous event callbacks... 00:14:40.020 Starting namespace attribute notice tests for all controllers... 00:14:40.020 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:40.020 aer_cb - Changed Namespace 00:14:40.020 Cleaning up... 00:14:40.020 [ 00:14:40.020 { 00:14:40.020 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:40.020 "subtype": "Discovery", 00:14:40.020 "listen_addresses": [], 00:14:40.020 "allow_any_host": true, 00:14:40.020 "hosts": [] 00:14:40.020 }, 00:14:40.020 { 00:14:40.020 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:40.020 "subtype": "NVMe", 00:14:40.020 "listen_addresses": [ 00:14:40.020 { 00:14:40.020 "trtype": "VFIOUSER", 00:14:40.020 "adrfam": "IPv4", 00:14:40.020 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:40.020 "trsvcid": "0" 00:14:40.020 } 00:14:40.020 ], 00:14:40.020 "allow_any_host": true, 00:14:40.020 "hosts": [], 00:14:40.020 "serial_number": "SPDK1", 00:14:40.020 "model_number": "SPDK bdev Controller", 00:14:40.020 "max_namespaces": 32, 00:14:40.020 "min_cntlid": 1, 00:14:40.020 "max_cntlid": 65519, 00:14:40.020 "namespaces": [ 00:14:40.020 { 00:14:40.020 "nsid": 1, 00:14:40.020 "bdev_name": "Malloc1", 00:14:40.020 "name": "Malloc1", 00:14:40.020 "nguid": "04A83ACB452144CFBB49EA4E454F8730", 00:14:40.020 "uuid": "04a83acb-4521-44cf-bb49-ea4e454f8730" 00:14:40.020 }, 00:14:40.020 { 00:14:40.020 "nsid": 2, 00:14:40.020 "bdev_name": "Malloc3", 00:14:40.020 "name": "Malloc3", 00:14:40.020 "nguid": "9D7194746CD14B0899CA32F9218F36B9", 00:14:40.020 "uuid": "9d719474-6cd1-4b08-99ca-32f9218f36b9" 00:14:40.020 } 00:14:40.020 ] 00:14:40.020 }, 00:14:40.020 { 00:14:40.020 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:40.020 "subtype": "NVMe", 00:14:40.020 "listen_addresses": [ 00:14:40.020 { 00:14:40.020 "trtype": "VFIOUSER", 00:14:40.020 "adrfam": "IPv4", 00:14:40.020 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:40.020 "trsvcid": "0" 00:14:40.020 } 00:14:40.020 ], 00:14:40.020 "allow_any_host": true, 00:14:40.020 "hosts": [], 00:14:40.020 
"serial_number": "SPDK2", 00:14:40.020 "model_number": "SPDK bdev Controller", 00:14:40.020 "max_namespaces": 32, 00:14:40.020 "min_cntlid": 1, 00:14:40.020 "max_cntlid": 65519, 00:14:40.020 "namespaces": [ 00:14:40.020 { 00:14:40.020 "nsid": 1, 00:14:40.020 "bdev_name": "Malloc2", 00:14:40.020 "name": "Malloc2", 00:14:40.020 "nguid": "3010DFC97BEC42B8860608C99F87E686", 00:14:40.020 "uuid": "3010dfc9-7bec-42b8-8606-08c99f87e686" 00:14:40.020 } 00:14:40.021 ] 00:14:40.021 } 00:14:40.021 ] 00:14:40.021 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2940012 00:14:40.021 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:40.021 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:40.021 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:40.021 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:40.021 [2024-07-26 13:56:07.395631] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:14:40.021 [2024-07-26 13:56:07.395664] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2940239 ] 00:14:40.021 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.021 [2024-07-26 13:56:07.423435] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:40.021 [2024-07-26 13:56:07.426003] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:40.021 [2024-07-26 13:56:07.426023] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1f689f6000 00:14:40.308 [2024-07-26 13:56:07.426998] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:40.308 [2024-07-26 13:56:07.428005] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:40.308 [2024-07-26 13:56:07.429013] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:40.308 [2024-07-26 13:56:07.430020] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:40.308 [2024-07-26 13:56:07.431032] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:40.308 [2024-07-26 13:56:07.432034] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:40.308 [2024-07-26 13:56:07.433041] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:40.308 [2024-07-26 13:56:07.434054] 
vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:40.308 [2024-07-26 13:56:07.435065] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:40.308 [2024-07-26 13:56:07.435075] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1f689eb000 00:14:40.308 [2024-07-26 13:56:07.436011] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:40.308 [2024-07-26 13:56:07.449494] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:40.308 [2024-07-26 13:56:07.449513] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:40.308 [2024-07-26 13:56:07.451579] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:40.308 [2024-07-26 13:56:07.451619] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:40.308 [2024-07-26 13:56:07.451688] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:40.308 [2024-07-26 13:56:07.451702] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:40.308 [2024-07-26 13:56:07.451707] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:40.308 [2024-07-26 13:56:07.452583] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:40.308 [2024-07-26 13:56:07.452595] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:40.308 [2024-07-26 13:56:07.452602] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:40.308 [2024-07-26 13:56:07.453588] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:40.308 [2024-07-26 13:56:07.453598] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:40.308 [2024-07-26 13:56:07.453604] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:40.308 [2024-07-26 13:56:07.454595] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:40.308 [2024-07-26 13:56:07.454604] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:40.308 [2024-07-26 13:56:07.455607] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:40.308 [2024-07-26 13:56:07.455615] 
nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:40.308 [2024-07-26 13:56:07.455620] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:40.308 [2024-07-26 13:56:07.455626] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:40.308 [2024-07-26 13:56:07.455731] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:40.308 [2024-07-26 13:56:07.455735] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:40.308 [2024-07-26 13:56:07.455740] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:40.308 [2024-07-26 13:56:07.456603] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:40.308 [2024-07-26 13:56:07.457609] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:40.308 [2024-07-26 13:56:07.458621] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:40.308 [2024-07-26 13:56:07.459623] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:40.308 [2024-07-26 13:56:07.459660] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:40.308 [2024-07-26 13:56:07.460632] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:40.308 [2024-07-26 13:56:07.460643] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:40.308 [2024-07-26 13:56:07.460648] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:40.308 [2024-07-26 13:56:07.460664] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:40.308 [2024-07-26 13:56:07.460674] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:40.308 [2024-07-26 13:56:07.460685] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:40.308 [2024-07-26 13:56:07.460690] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:40.308 [2024-07-26 13:56:07.460693] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:40.308 [2024-07-26 13:56:07.460705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:40.308 [2024-07-26 13:56:07.471052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:40.309 [2024-07-26 13:56:07.471064] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:40.309 [2024-07-26 13:56:07.471070] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:40.309 [2024-07-26 13:56:07.471075] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:40.309 [2024-07-26 13:56:07.471080] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:40.309 [2024-07-26 13:56:07.471085] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:40.309 [2024-07-26 13:56:07.471088] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:40.309 [2024-07-26 13:56:07.471093] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:40.309 [2024-07-26 13:56:07.471099] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:40.309 [2024-07-26 13:56:07.471112] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:40.309 [2024-07-26 13:56:07.479049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:40.309 [2024-07-26 13:56:07.479064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:40.309 [2024-07-26 13:56:07.479072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:40.309 [2024-07-26 13:56:07.479079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:40.309 [2024-07-26 13:56:07.479087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:40.309 [2024-07-26 13:56:07.479092] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:40.309 [2024-07-26 13:56:07.479099] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:40.309 [2024-07-26 13:56:07.479109] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:40.309 [2024-07-26 13:56:07.487048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:40.309 [2024-07-26 13:56:07.487056] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:40.309 [2024-07-26 13:56:07.487061] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:40.309 [2024-07-26 13:56:07.487069] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:40.309 [2024-07-26 13:56:07.487075] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:40.309 [2024-07-26 13:56:07.487083] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:40.309 [2024-07-26 13:56:07.495049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:40.309 [2024-07-26 13:56:07.495104] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:40.309 [2024-07-26 13:56:07.495112] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:40.309 [2024-07-26 13:56:07.495119] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:40.309 [2024-07-26 13:56:07.495123] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:40.309 [2024-07-26 13:56:07.495127] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:40.309 [2024-07-26 13:56:07.495133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:40.309 [2024-07-26 13:56:07.503049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:40.309 [2024-07-26 13:56:07.503060] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:40.309 [2024-07-26 13:56:07.503068] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:40.309 [2024-07-26 13:56:07.503075] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:40.309 [2024-07-26 13:56:07.503082] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:40.309 [2024-07-26 13:56:07.503086] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:40.309 [2024-07-26 13:56:07.503090] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:40.309 [2024-07-26 13:56:07.503095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:40.309 [2024-07-26 13:56:07.511048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:40.309 [2024-07-26 13:56:07.511063] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:40.309 [2024-07-26 13:56:07.511071] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:40.309 [2024-07-26 13:56:07.511078] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:40.309 [2024-07-26 13:56:07.511082] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:40.309 [2024-07-26 13:56:07.511087] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:40.309 [2024-07-26 13:56:07.511093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:40.309 [2024-07-26 13:56:07.519048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:40.309 [2024-07-26 13:56:07.519057] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:40.309 [2024-07-26 13:56:07.519063] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:40.309 [2024-07-26 13:56:07.519070] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:40.309 [2024-07-26 13:56:07.519077] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:40.309 [2024-07-26 13:56:07.519082] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:40.309 [2024-07-26 13:56:07.519087] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:40.309 [2024-07-26 13:56:07.519091] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:40.309 [2024-07-26 13:56:07.519096] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:40.309 [2024-07-26 13:56:07.519100] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:40.309 [2024-07-26 13:56:07.519116] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:40.309 [2024-07-26 13:56:07.526916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:40.309 [2024-07-26 13:56:07.526929] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:40.309 [2024-07-26 13:56:07.538049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:40.309 [2024-07-26 13:56:07.538062] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:40.309 [2024-07-26 13:56:07.546048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:40.309 [2024-07-26 13:56:07.546060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:40.309 [2024-07-26 13:56:07.554050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:40.309 [2024-07-26 13:56:07.554068] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:40.309 [2024-07-26 13:56:07.554073] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:40.309 [2024-07-26 13:56:07.554076] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:40.309 [2024-07-26 13:56:07.554079] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:40.309 [2024-07-26 13:56:07.554082] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:40.309 [2024-07-26 13:56:07.554088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:40.309 [2024-07-26 13:56:07.554096] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:40.309 [2024-07-26 13:56:07.554101] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:40.309 [2024-07-26 13:56:07.554104] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:40.309 [2024-07-26 13:56:07.554110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:40.309 [2024-07-26 13:56:07.554116] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:40.309 [2024-07-26 13:56:07.554120] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:40.309 [2024-07-26 13:56:07.554123] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:40.309 [2024-07-26 13:56:07.554128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:40.309 [2024-07-26 13:56:07.554135] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:40.309 [2024-07-26 13:56:07.554139] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:40.309 [2024-07-26 13:56:07.554142] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:40.309 [2024-07-26 13:56:07.554148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:40.310 [2024-07-26 13:56:07.562049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:40.310 [2024-07-26 13:56:07.562061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:40.310 [2024-07-26 13:56:07.562072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:40.310 [2024-07-26 13:56:07.562078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:40.310 ===================================================== 00:14:40.310 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:40.310 ===================================================== 00:14:40.310 Controller Capabilities/Features 00:14:40.310 ================================ 00:14:40.310 Vendor ID: 4e58 00:14:40.310 Subsystem Vendor ID: 4e58 00:14:40.310 Serial Number: SPDK2 00:14:40.310 Model Number: SPDK bdev Controller 00:14:40.310 Firmware Version: 24.09 00:14:40.310 Recommended Arb Burst: 6 00:14:40.310 IEEE OUI Identifier: 8d 6b 50 00:14:40.310 Multi-path I/O 00:14:40.310 May have multiple subsystem ports: Yes 00:14:40.310 May have multiple controllers: Yes 00:14:40.310 Associated with SR-IOV VF: No 00:14:40.310 Max Data Transfer Size: 131072 00:14:40.310 Max Number of Namespaces: 32 00:14:40.310 Max Number of I/O Queues: 127 00:14:40.310 NVMe Specification Version (VS): 1.3 00:14:40.310 NVMe Specification Version (Identify): 1.3 00:14:40.310 Maximum Queue Entries: 256 00:14:40.310 Contiguous Queues Required: Yes 00:14:40.310 Arbitration Mechanisms Supported 00:14:40.310 Weighted Round Robin: Not Supported 00:14:40.310 Vendor Specific: Not Supported 00:14:40.310 Reset Timeout: 15000 ms 00:14:40.310 Doorbell Stride: 4 bytes 00:14:40.310 NVM Subsystem Reset: Not Supported 00:14:40.310 Command Sets Supported 00:14:40.310 NVM Command Set: Supported 00:14:40.310 Boot Partition: Not Supported 00:14:40.310 Memory Page Size Minimum: 4096 bytes 00:14:40.310 Memory Page Size Maximum: 4096 bytes 00:14:40.310 Persistent Memory Region: Not Supported 00:14:40.310 Optional Asynchronous Events Supported 00:14:40.310 Namespace Attribute Notices: Supported 00:14:40.310 Firmware Activation Notices: Not Supported 00:14:40.310 ANA Change Notices: Not Supported 00:14:40.310 PLE Aggregate Log Change Notices: Not Supported 00:14:40.310 LBA Status Info Alert Notices: Not Supported 00:14:40.310 EGE Aggregate Log Change Notices: Not Supported 00:14:40.310 Normal NVM Subsystem Shutdown event: Not Supported 00:14:40.310 Zone Descriptor Change Notices: Not Supported 00:14:40.310 Discovery Log Change Notices: Not Supported 00:14:40.310 Controller Attributes 00:14:40.310 128-bit Host Identifier: Supported 00:14:40.310 Non-Operational Permissive Mode: Not Supported 00:14:40.310 NVM Sets: Not Supported 00:14:40.310 Read Recovery Levels: Not Supported 00:14:40.310 Endurance Groups: Not Supported 00:14:40.310 Predictable Latency Mode: Not Supported 00:14:40.310 Traffic Based Keep ALive: Not Supported 00:14:40.310 Namespace Granularity: Not Supported 00:14:40.310 SQ Associations: Not Supported 00:14:40.310 UUID List: Not Supported 00:14:40.310 Multi-Domain Subsystem: Not Supported 00:14:40.310 Fixed Capacity Management: Not Supported 00:14:40.310 Variable Capacity Management: Not Supported 00:14:40.310 Delete Endurance Group: Not Supported 00:14:40.310 Delete NVM Set: Not Supported 00:14:40.310 Extended LBA Formats Supported: Not Supported 00:14:40.310 Flexible Data Placement Supported: Not Supported 00:14:40.310 00:14:40.310 Controller Memory Buffer Support 00:14:40.310 ================================ 00:14:40.310 Supported: No 00:14:40.310 00:14:40.310 Persistent Memory Region Support 00:14:40.310 
================================ 00:14:40.310 Supported: No 00:14:40.310 00:14:40.310 Admin Command Set Attributes 00:14:40.310 ============================ 00:14:40.310 Security Send/Receive: Not Supported 00:14:40.310 Format NVM: Not Supported 00:14:40.310 Firmware Activate/Download: Not Supported 00:14:40.310 Namespace Management: Not Supported 00:14:40.310 Device Self-Test: Not Supported 00:14:40.310 Directives: Not Supported 00:14:40.310 NVMe-MI: Not Supported 00:14:40.310 Virtualization Management: Not Supported 00:14:40.310 Doorbell Buffer Config: Not Supported 00:14:40.310 Get LBA Status Capability: Not Supported 00:14:40.310 Command & Feature Lockdown Capability: Not Supported 00:14:40.310 Abort Command Limit: 4 00:14:40.310 Async Event Request Limit: 4 00:14:40.310 Number of Firmware Slots: N/A 00:14:40.310 Firmware Slot 1 Read-Only: N/A 00:14:40.310 Firmware Activation Without Reset: N/A 00:14:40.310 Multiple Update Detection Support: N/A 00:14:40.310 Firmware Update Granularity: No Information Provided 00:14:40.310 Per-Namespace SMART Log: No 00:14:40.310 Asymmetric Namespace Access Log Page: Not Supported 00:14:40.310 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:40.310 Command Effects Log Page: Supported 00:14:40.310 Get Log Page Extended Data: Supported 00:14:40.310 Telemetry Log Pages: Not Supported 00:14:40.310 Persistent Event Log Pages: Not Supported 00:14:40.310 Supported Log Pages Log Page: May Support 00:14:40.310 Commands Supported & Effects Log Page: Not Supported 00:14:40.310 Feature Identifiers & Effects Log Page:May Support 00:14:40.310 NVMe-MI Commands & Effects Log Page: May Support 00:14:40.310 Data Area 4 for Telemetry Log: Not Supported 00:14:40.310 Error Log Page Entries Supported: 128 00:14:40.310 Keep Alive: Supported 00:14:40.310 Keep Alive Granularity: 10000 ms 00:14:40.310 00:14:40.310 NVM Command Set Attributes 00:14:40.310 ========================== 00:14:40.310 Submission Queue Entry Size 00:14:40.310 Max: 64 00:14:40.310 Min: 64 00:14:40.310 Completion Queue Entry Size 00:14:40.310 Max: 16 00:14:40.310 Min: 16 00:14:40.310 Number of Namespaces: 32 00:14:40.310 Compare Command: Supported 00:14:40.310 Write Uncorrectable Command: Not Supported 00:14:40.310 Dataset Management Command: Supported 00:14:40.310 Write Zeroes Command: Supported 00:14:40.310 Set Features Save Field: Not Supported 00:14:40.310 Reservations: Not Supported 00:14:40.310 Timestamp: Not Supported 00:14:40.310 Copy: Supported 00:14:40.310 Volatile Write Cache: Present 00:14:40.310 Atomic Write Unit (Normal): 1 00:14:40.310 Atomic Write Unit (PFail): 1 00:14:40.310 Atomic Compare & Write Unit: 1 00:14:40.310 Fused Compare & Write: Supported 00:14:40.310 Scatter-Gather List 00:14:40.310 SGL Command Set: Supported (Dword aligned) 00:14:40.310 SGL Keyed: Not Supported 00:14:40.310 SGL Bit Bucket Descriptor: Not Supported 00:14:40.310 SGL Metadata Pointer: Not Supported 00:14:40.310 Oversized SGL: Not Supported 00:14:40.310 SGL Metadata Address: Not Supported 00:14:40.310 SGL Offset: Not Supported 00:14:40.310 Transport SGL Data Block: Not Supported 00:14:40.310 Replay Protected Memory Block: Not Supported 00:14:40.310 00:14:40.310 Firmware Slot Information 00:14:40.310 ========================= 00:14:40.310 Active slot: 1 00:14:40.310 Slot 1 Firmware Revision: 24.09 00:14:40.310 00:14:40.310 00:14:40.310 Commands Supported and Effects 00:14:40.310 ============================== 00:14:40.310 Admin Commands 00:14:40.310 -------------- 00:14:40.310 Get Log Page (02h): Supported 
00:14:40.310 Identify (06h): Supported 00:14:40.310 Abort (08h): Supported 00:14:40.310 Set Features (09h): Supported 00:14:40.310 Get Features (0Ah): Supported 00:14:40.310 Asynchronous Event Request (0Ch): Supported 00:14:40.310 Keep Alive (18h): Supported 00:14:40.310 I/O Commands 00:14:40.310 ------------ 00:14:40.310 Flush (00h): Supported LBA-Change 00:14:40.310 Write (01h): Supported LBA-Change 00:14:40.310 Read (02h): Supported 00:14:40.310 Compare (05h): Supported 00:14:40.310 Write Zeroes (08h): Supported LBA-Change 00:14:40.310 Dataset Management (09h): Supported LBA-Change 00:14:40.310 Copy (19h): Supported LBA-Change 00:14:40.310 00:14:40.310 Error Log 00:14:40.310 ========= 00:14:40.310 00:14:40.310 Arbitration 00:14:40.310 =========== 00:14:40.310 Arbitration Burst: 1 00:14:40.310 00:14:40.310 Power Management 00:14:40.310 ================ 00:14:40.310 Number of Power States: 1 00:14:40.310 Current Power State: Power State #0 00:14:40.310 Power State #0: 00:14:40.310 Max Power: 0.00 W 00:14:40.310 Non-Operational State: Operational 00:14:40.310 Entry Latency: Not Reported 00:14:40.310 Exit Latency: Not Reported 00:14:40.310 Relative Read Throughput: 0 00:14:40.310 Relative Read Latency: 0 00:14:40.310 Relative Write Throughput: 0 00:14:40.310 Relative Write Latency: 0 00:14:40.311 Idle Power: Not Reported 00:14:40.311 Active Power: Not Reported 00:14:40.311 Non-Operational Permissive Mode: Not Supported 00:14:40.311 00:14:40.311 Health Information 00:14:40.311 ================== 00:14:40.311 Critical Warnings: 00:14:40.311 Available Spare Space: OK 00:14:40.311 Temperature: OK 00:14:40.311 Device Reliability: OK 00:14:40.311 Read Only: No 00:14:40.311 Volatile Memory Backup: OK 00:14:40.311 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:40.311 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:40.311 Available Spare: 0% 00:14:40.311 Available Sp[2024-07-26 13:56:07.562168] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:40.311 [2024-07-26 13:56:07.570049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:40.311 [2024-07-26 13:56:07.570075] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:40.311 [2024-07-26 13:56:07.570084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.311 [2024-07-26 13:56:07.570090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.311 [2024-07-26 13:56:07.570095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.311 [2024-07-26 13:56:07.570101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.311 [2024-07-26 13:56:07.570153] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:40.311 [2024-07-26 13:56:07.570163] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:40.311 [2024-07-26 13:56:07.571156] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling 
controller 00:14:40.311 [2024-07-26 13:56:07.571202] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:40.311 [2024-07-26 13:56:07.571209] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:40.311 [2024-07-26 13:56:07.572161] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:40.311 [2024-07-26 13:56:07.572172] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:40.311 [2024-07-26 13:56:07.572218] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:40.311 [2024-07-26 13:56:07.575048] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:40.311 are Threshold: 0% 00:14:40.311 Life Percentage Used: 0% 00:14:40.311 Data Units Read: 0 00:14:40.311 Data Units Written: 0 00:14:40.311 Host Read Commands: 0 00:14:40.311 Host Write Commands: 0 00:14:40.311 Controller Busy Time: 0 minutes 00:14:40.311 Power Cycles: 0 00:14:40.311 Power On Hours: 0 hours 00:14:40.311 Unsafe Shutdowns: 0 00:14:40.311 Unrecoverable Media Errors: 0 00:14:40.311 Lifetime Error Log Entries: 0 00:14:40.311 Warning Temperature Time: 0 minutes 00:14:40.311 Critical Temperature Time: 0 minutes 00:14:40.311 00:14:40.311 Number of Queues 00:14:40.311 ================ 00:14:40.311 Number of I/O Submission Queues: 127 00:14:40.311 Number of I/O Completion Queues: 127 00:14:40.311 00:14:40.311 Active Namespaces 00:14:40.311 ================= 00:14:40.311 Namespace ID:1 00:14:40.311 Error Recovery Timeout: Unlimited 00:14:40.311 Command Set Identifier: NVM (00h) 00:14:40.311 Deallocate: Supported 00:14:40.311 Deallocated/Unwritten Error: Not Supported 00:14:40.311 Deallocated Read Value: Unknown 00:14:40.311 Deallocate in Write Zeroes: Not Supported 00:14:40.311 Deallocated Guard Field: 0xFFFF 00:14:40.311 Flush: Supported 00:14:40.311 Reservation: Supported 00:14:40.311 Namespace Sharing Capabilities: Multiple Controllers 00:14:40.311 Size (in LBAs): 131072 (0GiB) 00:14:40.311 Capacity (in LBAs): 131072 (0GiB) 00:14:40.311 Utilization (in LBAs): 131072 (0GiB) 00:14:40.311 NGUID: 3010DFC97BEC42B8860608C99F87E686 00:14:40.311 UUID: 3010dfc9-7bec-42b8-8606-08c99f87e686 00:14:40.311 Thin Provisioning: Not Supported 00:14:40.311 Per-NS Atomic Units: Yes 00:14:40.311 Atomic Boundary Size (Normal): 0 00:14:40.311 Atomic Boundary Size (PFail): 0 00:14:40.311 Atomic Boundary Offset: 0 00:14:40.311 Maximum Single Source Range Length: 65535 00:14:40.311 Maximum Copy Length: 65535 00:14:40.311 Maximum Source Range Count: 1 00:14:40.311 NGUID/EUI64 Never Reused: No 00:14:40.311 Namespace Write Protected: No 00:14:40.311 Number of LBA Formats: 1 00:14:40.311 Current LBA Format: LBA Format #00 00:14:40.311 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:40.311 00:14:40.311 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:40.311 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.571 [2024-07-26 
13:56:07.787414] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:45.853 Initializing NVMe Controllers 00:14:45.853 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:45.853 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:45.853 Initialization complete. Launching workers. 00:14:45.853 ======================================================== 00:14:45.853 Latency(us) 00:14:45.853 Device Information : IOPS MiB/s Average min max 00:14:45.853 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39915.13 155.92 3206.38 958.16 6871.53 00:14:45.853 ======================================================== 00:14:45.853 Total : 39915.13 155.92 3206.38 958.16 6871.53 00:14:45.853 00:14:45.854 [2024-07-26 13:56:12.894315] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:45.854 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:45.854 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.854 [2024-07-26 13:56:13.118971] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:51.135 Initializing NVMe Controllers 00:14:51.135 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:51.136 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:51.136 Initialization complete. Launching workers. 
00:14:51.136 ======================================================== 00:14:51.136 Latency(us) 00:14:51.136 Device Information : IOPS MiB/s Average min max 00:14:51.136 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39947.25 156.04 3204.04 1003.33 6595.14 00:14:51.136 ======================================================== 00:14:51.136 Total : 39947.25 156.04 3204.04 1003.33 6595.14 00:14:51.136 00:14:51.136 [2024-07-26 13:56:18.141711] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:51.136 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:51.136 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.136 [2024-07-26 13:56:18.334124] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:56.418 [2024-07-26 13:56:23.484138] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:56.418 Initializing NVMe Controllers 00:14:56.418 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:56.418 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:56.418 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:56.418 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:56.418 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:56.418 Initialization complete. Launching workers. 00:14:56.418 Starting thread on core 2 00:14:56.418 Starting thread on core 3 00:14:56.418 Starting thread on core 1 00:14:56.418 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:56.418 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.418 [2024-07-26 13:56:23.761014] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:59.709 [2024-07-26 13:56:26.817636] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:59.709 Initializing NVMe Controllers 00:14:59.709 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:59.709 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:59.709 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:59.709 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:59.709 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:59.709 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:59.709 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:59.709 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:59.709 Initialization complete. Launching workers. 
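Note on the two spdk_nvme_perf runs traced above: the @84 and @85 invocations drive the vfio-user controller with 4 KiB reads and then writes at queue depth 128 for 5 seconds on core mask 0x2, matching the "NSID 1 with lcore 1" association in the output. A minimal sketch for rerunning them by hand, assuming SPDK_DIR stands in for the workspace checkout shown in the log and the target is still listening on the same vfio-user path:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from the log
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # 5 s of 4 KiB reads, queue depth 128, core mask 0x2 (as in test @84)
  "$SPDK_DIR"/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
  # same shape with writes (as in test @85)
  "$SPDK_DIR"/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2

Both runs reported roughly 39.9k IOPS at about 3.2 ms average latency in the tables above.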
00:14:59.709 Starting thread on core 1 with urgent priority queue 00:14:59.709 Starting thread on core 2 with urgent priority queue 00:14:59.709 Starting thread on core 3 with urgent priority queue 00:14:59.710 Starting thread on core 0 with urgent priority queue 00:14:59.710 SPDK bdev Controller (SPDK2 ) core 0: 10027.00 IO/s 9.97 secs/100000 ios 00:14:59.710 SPDK bdev Controller (SPDK2 ) core 1: 8004.00 IO/s 12.49 secs/100000 ios 00:14:59.710 SPDK bdev Controller (SPDK2 ) core 2: 9096.67 IO/s 10.99 secs/100000 ios 00:14:59.710 SPDK bdev Controller (SPDK2 ) core 3: 10377.67 IO/s 9.64 secs/100000 ios 00:14:59.710 ======================================================== 00:14:59.710 00:14:59.710 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:59.710 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.710 [2024-07-26 13:56:27.085455] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:59.710 Initializing NVMe Controllers 00:14:59.710 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:59.710 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:59.710 Namespace ID: 1 size: 0GB 00:14:59.710 Initialization complete. 00:14:59.710 INFO: using host memory buffer for IO 00:14:59.710 Hello world! 00:14:59.710 [2024-07-26 13:56:27.097529] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:59.710 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:59.968 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.969 [2024-07-26 13:56:27.367950] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:01.346 Initializing NVMe Controllers 00:15:01.346 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:01.346 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:01.346 Initialization complete. Launching workers. 
00:15:01.346 submit (in ns) avg, min, max = 7744.9, 3226.1, 5992980.0 00:15:01.346 complete (in ns) avg, min, max = 17916.2, 1759.1, 6990137.4 00:15:01.346 00:15:01.346 Submit histogram 00:15:01.346 ================ 00:15:01.346 Range in us Cumulative Count 00:15:01.346 3.214 - 3.228: 0.0061% ( 1) 00:15:01.346 3.228 - 3.242: 0.0122% ( 1) 00:15:01.346 3.256 - 3.270: 0.0367% ( 4) 00:15:01.346 3.270 - 3.283: 0.0489% ( 2) 00:15:01.346 3.283 - 3.297: 0.2139% ( 27) 00:15:01.346 3.297 - 3.311: 1.1978% ( 161) 00:15:01.346 3.311 - 3.325: 4.2718% ( 503) 00:15:01.346 3.325 - 3.339: 9.3443% ( 830) 00:15:01.346 3.339 - 3.353: 15.2723% ( 970) 00:15:01.346 3.353 - 3.367: 21.4508% ( 1011) 00:15:01.346 3.367 - 3.381: 27.5989% ( 1006) 00:15:01.346 3.381 - 3.395: 32.5368% ( 808) 00:15:01.346 3.395 - 3.409: 37.6215% ( 832) 00:15:01.346 3.409 - 3.423: 42.9139% ( 866) 00:15:01.346 3.423 - 3.437: 47.3324% ( 723) 00:15:01.346 3.437 - 3.450: 51.3842% ( 663) 00:15:01.346 3.450 - 3.464: 56.4200% ( 824) 00:15:01.346 3.464 - 3.478: 62.8919% ( 1059) 00:15:01.346 3.478 - 3.492: 67.7015% ( 787) 00:15:01.346 3.492 - 3.506: 72.1995% ( 736) 00:15:01.346 3.506 - 3.520: 77.2780% ( 831) 00:15:01.346 3.520 - 3.534: 81.4337% ( 680) 00:15:01.346 3.534 - 3.548: 84.2144% ( 455) 00:15:01.346 3.548 - 3.562: 85.6994% ( 243) 00:15:01.346 3.562 - 3.590: 87.0317% ( 218) 00:15:01.346 3.590 - 3.617: 88.1501% ( 183) 00:15:01.346 3.617 - 3.645: 89.8307% ( 275) 00:15:01.346 3.645 - 3.673: 91.5113% ( 275) 00:15:01.346 3.673 - 3.701: 93.1308% ( 265) 00:15:01.346 3.701 - 3.729: 95.0498% ( 314) 00:15:01.346 3.729 - 3.757: 96.6510% ( 262) 00:15:01.346 3.757 - 3.784: 97.9099% ( 206) 00:15:01.346 3.784 - 3.812: 98.6127% ( 115) 00:15:01.346 3.812 - 3.840: 99.0894% ( 78) 00:15:01.346 3.840 - 3.868: 99.3705% ( 46) 00:15:01.346 3.868 - 3.896: 99.4928% ( 20) 00:15:01.346 3.896 - 3.923: 99.5539% ( 10) 00:15:01.346 3.923 - 3.951: 99.5783% ( 4) 00:15:01.346 3.979 - 4.007: 99.5967% ( 3) 00:15:01.346 4.063 - 4.090: 99.6028% ( 1) 00:15:01.346 4.925 - 4.953: 99.6089% ( 1) 00:15:01.346 5.092 - 5.120: 99.6150% ( 1) 00:15:01.346 5.120 - 5.148: 99.6211% ( 1) 00:15:01.346 5.231 - 5.259: 99.6272% ( 1) 00:15:01.346 5.398 - 5.426: 99.6333% ( 1) 00:15:01.346 5.426 - 5.454: 99.6394% ( 1) 00:15:01.346 5.510 - 5.537: 99.6455% ( 1) 00:15:01.347 5.565 - 5.593: 99.6517% ( 1) 00:15:01.347 5.621 - 5.649: 99.6700% ( 3) 00:15:01.347 5.649 - 5.677: 99.6761% ( 1) 00:15:01.347 5.704 - 5.732: 99.6822% ( 1) 00:15:01.347 5.732 - 5.760: 99.6944% ( 2) 00:15:01.347 5.788 - 5.816: 99.7005% ( 1) 00:15:01.347 5.843 - 5.871: 99.7067% ( 1) 00:15:01.347 5.899 - 5.927: 99.7128% ( 1) 00:15:01.347 5.927 - 5.955: 99.7189% ( 1) 00:15:01.347 5.955 - 5.983: 99.7250% ( 1) 00:15:01.347 5.983 - 6.010: 99.7311% ( 1) 00:15:01.347 6.038 - 6.066: 99.7433% ( 2) 00:15:01.347 6.233 - 6.261: 99.7494% ( 1) 00:15:01.347 6.317 - 6.344: 99.7555% ( 1) 00:15:01.347 6.456 - 6.483: 99.7617% ( 1) 00:15:01.347 6.595 - 6.623: 99.7678% ( 1) 00:15:01.347 6.706 - 6.734: 99.7739% ( 1) 00:15:01.347 6.845 - 6.873: 99.7800% ( 1) 00:15:01.347 6.929 - 6.957: 99.7861% ( 1) 00:15:01.347 7.012 - 7.040: 99.7922% ( 1) 00:15:01.347 7.123 - 7.179: 99.7983% ( 1) 00:15:01.347 7.235 - 7.290: 99.8044% ( 1) 00:15:01.347 7.346 - 7.402: 99.8105% ( 1) 00:15:01.347 7.402 - 7.457: 99.8167% ( 1) 00:15:01.347 7.457 - 7.513: 99.8350% ( 3) 00:15:01.347 7.513 - 7.569: 99.8411% ( 1) 00:15:01.347 7.569 - 7.624: 99.8472% ( 1) 00:15:01.347 7.680 - 7.736: 99.8533% ( 1) 00:15:01.347 7.847 - 7.903: 99.8594% ( 1) 00:15:01.347 7.903 - 7.958: 99.8656% ( 1) 
00:15:01.347 7.958 - 8.014: 99.8717% ( 1) 00:15:01.347 8.237 - 8.292: 99.8778% ( 1) 00:15:01.347 [2024-07-26 13:56:28.462090] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:01.347 9.294 - 9.350: 99.8839% ( 1) 00:15:01.347 9.628 - 9.683: 99.8900% ( 1) 00:15:01.347 11.631 - 11.687: 99.8961% ( 1) 00:15:01.347 3989.148 - 4017.642: 99.9939% ( 16) 00:15:01.347 5983.722 - 6012.216: 100.0000% ( 1) 00:15:01.347 00:15:01.347 Complete histogram 00:15:01.347 ================== 00:15:01.347 Range in us Cumulative Count 00:15:01.347 1.753 - 1.760: 0.0061% ( 1) 00:15:01.347 1.760 - 1.767: 0.0244% ( 3) 00:15:01.347 1.767 - 1.774: 0.0367% ( 2) 00:15:01.347 1.774 - 1.781: 0.0672% ( 5) 00:15:01.347 1.795 - 1.809: 0.3972% ( 54) 00:15:01.347 1.809 - 1.823: 10.3832% ( 1634) 00:15:01.347 1.823 - 1.837: 25.5821% ( 2487) 00:15:01.347 1.837 - 1.850: 29.6889% ( 672) 00:15:01.347 1.850 - 1.864: 47.8030% ( 2964) 00:15:01.347 1.864 - 1.878: 84.1716% ( 5951) 00:15:01.347 1.878 - 1.892: 93.1064% ( 1462) 00:15:01.347 1.892 - 1.906: 96.2599% ( 516) 00:15:01.347 1.906 - 1.920: 97.7632% ( 246) 00:15:01.347 1.920 - 1.934: 98.2888% ( 86) 00:15:01.347 1.934 - 1.948: 98.8694% ( 95) 00:15:01.347 1.948 - 1.962: 99.1872% ( 52) 00:15:01.347 1.962 - 1.976: 99.2850% ( 16) 00:15:01.347 1.976 - 1.990: 99.3400% ( 9) 00:15:01.347 1.990 - 2.003: 99.3705% ( 5) 00:15:01.347 2.003 - 2.017: 99.3828% ( 2) 00:15:01.347 2.017 - 2.031: 99.3889% ( 1) 00:15:01.347 2.031 - 2.045: 99.4072% ( 3) 00:15:01.347 2.045 - 2.059: 99.4194% ( 2) 00:15:01.347 2.115 - 2.129: 99.4255% ( 1) 00:15:01.347 2.184 - 2.198: 99.4316% ( 1) 00:15:01.347 2.212 - 2.226: 99.4378% ( 1) 00:15:01.347 2.268 - 2.282: 99.4439% ( 1) 00:15:01.347 2.310 - 2.323: 99.4500% ( 1) 00:15:01.347 2.407 - 2.421: 99.4561% ( 1) 00:15:01.347 2.435 - 2.449: 99.4622% ( 1) 00:15:01.347 3.562 - 3.590: 99.4683% ( 1) 00:15:01.347 3.673 - 3.701: 99.4744% ( 1) 00:15:01.347 3.923 - 3.951: 99.4805% ( 1) 00:15:01.347 4.174 - 4.202: 99.4866% ( 1) 00:15:01.347 4.285 - 4.313: 99.4928% ( 1) 00:15:01.347 4.870 - 4.897: 99.5050% ( 2) 00:15:01.347 5.259 - 5.287: 99.5111% ( 1) 00:15:01.347 5.287 - 5.315: 99.5172% ( 1) 00:15:01.347 5.343 - 5.370: 99.5233% ( 1) 00:15:01.347 5.510 - 5.537: 99.5294% ( 1) 00:15:01.347 5.537 - 5.565: 99.5355% ( 1) 00:15:01.347 5.677 - 5.704: 99.5416% ( 1) 00:15:01.347 5.760 - 5.788: 99.5478% ( 1) 00:15:01.347 5.871 - 5.899: 99.5600% ( 2) 00:15:01.347 5.927 - 5.955: 99.5661% ( 1) 00:15:01.347 6.539 - 6.567: 99.5722% ( 1) 00:15:01.347 6.678 - 6.706: 99.5783% ( 1) 00:15:01.347 6.734 - 6.762: 99.5844% ( 1) 00:15:01.347 7.903 - 7.958: 99.5905% ( 1) 00:15:01.347 10.685 - 10.741: 99.5967% ( 1) 00:15:01.347 39.402 - 39.624: 99.6028% ( 1) 00:15:01.347 3989.148 - 4017.642: 99.9939% ( 64) 00:15:01.347 6981.009 - 7009.503: 100.0000% ( 1) 00:15:01.347 00:15:01.347 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:01.347 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:01.347 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:01.347 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:01.347 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:01.347 [ 00:15:01.347 { 00:15:01.347 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:01.347 "subtype": "Discovery", 00:15:01.347 "listen_addresses": [], 00:15:01.347 "allow_any_host": true, 00:15:01.347 "hosts": [] 00:15:01.347 }, 00:15:01.347 { 00:15:01.347 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:01.347 "subtype": "NVMe", 00:15:01.347 "listen_addresses": [ 00:15:01.347 { 00:15:01.347 "trtype": "VFIOUSER", 00:15:01.347 "adrfam": "IPv4", 00:15:01.347 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:01.347 "trsvcid": "0" 00:15:01.347 } 00:15:01.347 ], 00:15:01.347 "allow_any_host": true, 00:15:01.347 "hosts": [], 00:15:01.347 "serial_number": "SPDK1", 00:15:01.347 "model_number": "SPDK bdev Controller", 00:15:01.347 "max_namespaces": 32, 00:15:01.347 "min_cntlid": 1, 00:15:01.347 "max_cntlid": 65519, 00:15:01.347 "namespaces": [ 00:15:01.347 { 00:15:01.347 "nsid": 1, 00:15:01.347 "bdev_name": "Malloc1", 00:15:01.347 "name": "Malloc1", 00:15:01.347 "nguid": "04A83ACB452144CFBB49EA4E454F8730", 00:15:01.347 "uuid": "04a83acb-4521-44cf-bb49-ea4e454f8730" 00:15:01.347 }, 00:15:01.347 { 00:15:01.347 "nsid": 2, 00:15:01.347 "bdev_name": "Malloc3", 00:15:01.347 "name": "Malloc3", 00:15:01.347 "nguid": "9D7194746CD14B0899CA32F9218F36B9", 00:15:01.347 "uuid": "9d719474-6cd1-4b08-99ca-32f9218f36b9" 00:15:01.347 } 00:15:01.347 ] 00:15:01.347 }, 00:15:01.347 { 00:15:01.347 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:01.347 "subtype": "NVMe", 00:15:01.347 "listen_addresses": [ 00:15:01.347 { 00:15:01.347 "trtype": "VFIOUSER", 00:15:01.347 "adrfam": "IPv4", 00:15:01.347 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:01.347 "trsvcid": "0" 00:15:01.347 } 00:15:01.347 ], 00:15:01.347 "allow_any_host": true, 00:15:01.347 "hosts": [], 00:15:01.347 "serial_number": "SPDK2", 00:15:01.347 "model_number": "SPDK bdev Controller", 00:15:01.347 "max_namespaces": 32, 00:15:01.347 "min_cntlid": 1, 00:15:01.347 "max_cntlid": 65519, 00:15:01.347 "namespaces": [ 00:15:01.347 { 00:15:01.347 "nsid": 1, 00:15:01.347 "bdev_name": "Malloc2", 00:15:01.347 "name": "Malloc2", 00:15:01.347 "nguid": "3010DFC97BEC42B8860608C99F87E686", 00:15:01.347 "uuid": "3010dfc9-7bec-42b8-8606-08c99f87e686" 00:15:01.347 } 00:15:01.347 ] 00:15:01.347 } 00:15:01.347 ] 00:15:01.347 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:01.347 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:01.347 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2943690 00:15:01.347 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:01.347 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:01.347 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:01.347 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:01.347 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:01.347 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:01.347 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:01.347 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.607 [2024-07-26 13:56:28.819756] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:01.607 Malloc4 00:15:01.607 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:01.867 [2024-07-26 13:56:29.045457] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:01.867 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:01.867 Asynchronous Event Request test 00:15:01.867 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:01.867 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:01.867 Registering asynchronous event callbacks... 00:15:01.867 Starting namespace attribute notice tests for all controllers... 00:15:01.867 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:01.867 aer_cb - Changed Namespace 00:15:01.867 Cleaning up... 00:15:01.867 [ 00:15:01.867 { 00:15:01.867 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:01.867 "subtype": "Discovery", 00:15:01.867 "listen_addresses": [], 00:15:01.867 "allow_any_host": true, 00:15:01.867 "hosts": [] 00:15:01.867 }, 00:15:01.867 { 00:15:01.867 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:01.867 "subtype": "NVMe", 00:15:01.867 "listen_addresses": [ 00:15:01.867 { 00:15:01.867 "trtype": "VFIOUSER", 00:15:01.867 "adrfam": "IPv4", 00:15:01.867 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:01.867 "trsvcid": "0" 00:15:01.867 } 00:15:01.867 ], 00:15:01.867 "allow_any_host": true, 00:15:01.867 "hosts": [], 00:15:01.867 "serial_number": "SPDK1", 00:15:01.867 "model_number": "SPDK bdev Controller", 00:15:01.867 "max_namespaces": 32, 00:15:01.867 "min_cntlid": 1, 00:15:01.867 "max_cntlid": 65519, 00:15:01.867 "namespaces": [ 00:15:01.867 { 00:15:01.867 "nsid": 1, 00:15:01.867 "bdev_name": "Malloc1", 00:15:01.867 "name": "Malloc1", 00:15:01.867 "nguid": "04A83ACB452144CFBB49EA4E454F8730", 00:15:01.867 "uuid": "04a83acb-4521-44cf-bb49-ea4e454f8730" 00:15:01.867 }, 00:15:01.867 { 00:15:01.867 "nsid": 2, 00:15:01.867 "bdev_name": "Malloc3", 00:15:01.867 "name": "Malloc3", 00:15:01.867 "nguid": "9D7194746CD14B0899CA32F9218F36B9", 00:15:01.867 "uuid": "9d719474-6cd1-4b08-99ca-32f9218f36b9" 00:15:01.867 } 00:15:01.867 ] 00:15:01.867 }, 00:15:01.867 { 00:15:01.867 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:01.867 "subtype": "NVMe", 00:15:01.867 "listen_addresses": [ 00:15:01.867 { 00:15:01.867 "trtype": "VFIOUSER", 00:15:01.867 "adrfam": "IPv4", 00:15:01.867 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:01.867 "trsvcid": "0" 00:15:01.867 } 00:15:01.867 ], 00:15:01.867 "allow_any_host": true, 00:15:01.867 "hosts": [], 00:15:01.867 
"serial_number": "SPDK2", 00:15:01.867 "model_number": "SPDK bdev Controller", 00:15:01.867 "max_namespaces": 32, 00:15:01.867 "min_cntlid": 1, 00:15:01.867 "max_cntlid": 65519, 00:15:01.867 "namespaces": [ 00:15:01.867 { 00:15:01.867 "nsid": 1, 00:15:01.867 "bdev_name": "Malloc2", 00:15:01.867 "name": "Malloc2", 00:15:01.867 "nguid": "3010DFC97BEC42B8860608C99F87E686", 00:15:01.867 "uuid": "3010dfc9-7bec-42b8-8606-08c99f87e686" 00:15:01.867 }, 00:15:01.867 { 00:15:01.867 "nsid": 2, 00:15:01.867 "bdev_name": "Malloc4", 00:15:01.867 "name": "Malloc4", 00:15:01.867 "nguid": "1A95E0871879470B83F594FFF5A67CE9", 00:15:01.867 "uuid": "1a95e087-1879-470b-83f5-94fff5a67ce9" 00:15:01.867 } 00:15:01.867 ] 00:15:01.867 } 00:15:01.867 ] 00:15:01.867 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2943690 00:15:01.867 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:01.867 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2936061 00:15:01.867 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2936061 ']' 00:15:01.867 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2936061 00:15:01.867 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:01.867 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:01.867 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2936061 00:15:01.867 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:01.867 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:01.867 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2936061' 00:15:01.867 killing process with pid 2936061 00:15:01.867 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2936061 00:15:01.867 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2936061 00:15:02.128 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:02.388 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:02.388 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:02.388 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:02.388 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:02.388 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2943826 00:15:02.388 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2943826' 00:15:02.388 Process pid: 2943826 00:15:02.388 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:02.388 13:56:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:02.388 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2943826 00:15:02.388 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2943826 ']' 00:15:02.388 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.388 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:02.388 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.388 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:02.388 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:02.388 [2024-07-26 13:56:29.614362] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:02.388 [2024-07-26 13:56:29.615277] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:15:02.388 [2024-07-26 13:56:29.615317] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.388 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.388 [2024-07-26 13:56:29.670607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:02.388 [2024-07-26 13:56:29.751346] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.388 [2024-07-26 13:56:29.751385] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.388 [2024-07-26 13:56:29.751392] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.388 [2024-07-26 13:56:29.751401] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.388 [2024-07-26 13:56:29.751406] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.388 [2024-07-26 13:56:29.751454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.388 [2024-07-26 13:56:29.751552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.388 [2024-07-26 13:56:29.751613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.388 [2024-07-26 13:56:29.751614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.648 [2024-07-26 13:56:29.827468] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:02.648 [2024-07-26 13:56:29.827596] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:02.648 [2024-07-26 13:56:29.827819] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
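The rpc.py calls traced over the next several lines rebuild the two vfio-user subsystems against this interrupt-mode target. Condensed into a sketch for the second device only, assuming the default /var/tmp/spdk.sock RPC socket mentioned above and abbreviating the full workspace path to scripts/rpc.py:

  # interrupt-mode transport, then the per-device bring-up (device 2 shown)
  scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
  mkdir -p /var/run/vfio-user/domain/vfio-user2/2
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
  scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user2/2 -s 0

Device 1 follows the same pattern with Malloc1, serial SPDK1, nqn.2019-07.io.spdk:cnode1, and the vfio-user1/1 directory.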
00:15:02.648 [2024-07-26 13:56:29.828115] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:02.648 [2024-07-26 13:56:29.828329] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:03.218 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:03.218 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:03.218 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:04.158 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:04.418 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:04.418 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:04.418 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:04.418 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:04.418 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:04.418 Malloc1 00:15:04.418 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:04.679 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:05.010 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:05.010 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:05.010 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:05.010 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:05.270 Malloc2 00:15:05.270 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:05.529 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:05.529 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 
-s 0 00:15:05.789 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:05.789 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2943826 00:15:05.789 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2943826 ']' 00:15:05.789 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2943826 00:15:05.789 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:05.789 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:05.789 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2943826 00:15:05.789 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:05.789 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:05.789 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2943826' 00:15:05.789 killing process with pid 2943826 00:15:05.789 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2943826 00:15:05.789 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2943826 00:15:06.047 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:06.047 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:06.047 00:15:06.047 real 0m51.260s 00:15:06.047 user 3m22.945s 00:15:06.047 sys 0m3.568s 00:15:06.047 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:06.047 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:06.047 ************************************ 00:15:06.047 END TEST nvmf_vfio_user 00:15:06.047 ************************************ 00:15:06.047 13:56:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:06.047 13:56:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:06.047 13:56:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:06.047 13:56:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:06.047 ************************************ 00:15:06.047 START TEST nvmf_vfio_user_nvme_compliance 00:15:06.047 ************************************ 00:15:06.047 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:06.307 * Looking for test storage... 
00:15:06.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:06.307 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:06.308 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:06.308 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:06.308 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:06.308 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2944478 00:15:06.308 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2944478' 00:15:06.308 Process pid: 2944478 00:15:06.308 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:06.308 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2944478 00:15:06.308 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:06.308 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 2944478 ']' 00:15:06.308 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.308 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:06.308 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.308 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:06.308 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:06.308 [2024-07-26 13:56:33.582175] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:15:06.308 [2024-07-26 13:56:33.582228] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.308 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.308 [2024-07-26 13:56:33.637659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:06.308 [2024-07-26 13:56:33.711202] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.308 [2024-07-26 13:56:33.711244] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.308 [2024-07-26 13:56:33.711251] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.308 [2024-07-26 13:56:33.711257] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.308 [2024-07-26 13:56:33.711262] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:06.308 [2024-07-26 13:56:33.711312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.308 [2024-07-26 13:56:33.711410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.308 [2024-07-26 13:56:33.711412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.246 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:07.246 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:07.246 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:08.185 malloc0 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.185 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:08.185 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.185 00:15:08.185 00:15:08.185 CUnit - A unit testing framework for C - Version 2.1-3 00:15:08.185 http://cunit.sourceforge.net/ 00:15:08.185 00:15:08.185 00:15:08.185 Suite: nvme_compliance 00:15:08.185 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-26 13:56:35.612508] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:08.185 [2024-07-26 13:56:35.613850] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:08.185 [2024-07-26 13:56:35.613865] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:08.185 [2024-07-26 13:56:35.613871] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:08.185 [2024-07-26 13:56:35.615531] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.444 passed 00:15:08.444 Test: admin_identify_ctrlr_verify_fused ...[2024-07-26 13:56:35.693044] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:08.444 [2024-07-26 13:56:35.697066] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.444 passed 00:15:08.444 Test: admin_identify_ns ...[2024-07-26 13:56:35.776510] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:08.444 [2024-07-26 13:56:35.836057] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:08.444 [2024-07-26 13:56:35.844055] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:08.444 [2024-07-26 
13:56:35.868167] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.703 passed 00:15:08.703 Test: admin_get_features_mandatory_features ...[2024-07-26 13:56:35.941433] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:08.703 [2024-07-26 13:56:35.944453] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.703 passed 00:15:08.704 Test: admin_get_features_optional_features ...[2024-07-26 13:56:36.024999] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:08.704 [2024-07-26 13:56:36.028021] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.704 passed 00:15:08.704 Test: admin_set_features_number_of_queues ...[2024-07-26 13:56:36.105516] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:08.963 [2024-07-26 13:56:36.213133] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.963 passed 00:15:08.963 Test: admin_get_log_page_mandatory_logs ...[2024-07-26 13:56:36.288289] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:08.963 [2024-07-26 13:56:36.291306] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.963 passed 00:15:08.963 Test: admin_get_log_page_with_lpo ...[2024-07-26 13:56:36.369588] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:09.222 [2024-07-26 13:56:36.440063] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:09.222 [2024-07-26 13:56:36.453105] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:09.222 passed 00:15:09.222 Test: fabric_property_get ...[2024-07-26 13:56:36.528268] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:09.223 [2024-07-26 13:56:36.529523] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:09.223 [2024-07-26 13:56:36.531294] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:09.223 passed 00:15:09.223 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-26 13:56:36.612844] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:09.223 [2024-07-26 13:56:36.614089] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:09.223 [2024-07-26 13:56:36.615866] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:09.223 passed 00:15:09.482 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-26 13:56:36.691492] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:09.482 [2024-07-26 13:56:36.778052] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:09.482 [2024-07-26 13:56:36.794047] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:09.482 [2024-07-26 13:56:36.799134] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:09.482 passed 00:15:09.482 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-26 13:56:36.874256] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:09.482 [2024-07-26 13:56:36.875486] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:15:09.482 [2024-07-26 13:56:36.878289] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:09.482 passed 00:15:09.741 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-26 13:56:36.956218] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:09.741 [2024-07-26 13:56:37.033049] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:09.741 [2024-07-26 13:56:37.057054] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:09.741 [2024-07-26 13:56:37.062135] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:09.741 passed 00:15:09.741 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-26 13:56:37.135273] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:09.741 [2024-07-26 13:56:37.136499] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:09.741 [2024-07-26 13:56:37.136520] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:09.741 [2024-07-26 13:56:37.138306] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:09.741 passed 00:15:10.000 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-26 13:56:37.216215] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:10.000 [2024-07-26 13:56:37.309054] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:10.000 [2024-07-26 13:56:37.317048] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:10.000 [2024-07-26 13:56:37.325059] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:10.000 [2024-07-26 13:56:37.333052] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:10.000 [2024-07-26 13:56:37.362132] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:10.000 passed 00:15:10.260 Test: admin_create_io_sq_verify_pc ...[2024-07-26 13:56:37.439831] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:10.260 [2024-07-26 13:56:37.455054] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:10.260 [2024-07-26 13:56:37.472984] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:10.260 passed 00:15:10.260 Test: admin_create_io_qp_max_qps ...[2024-07-26 13:56:37.550534] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:11.198 [2024-07-26 13:56:38.634052] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:11.767 [2024-07-26 13:56:39.020359] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:11.767 passed 00:15:11.767 Test: admin_create_io_sq_shared_cq ...[2024-07-26 13:56:39.098422] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:12.028 [2024-07-26 13:56:39.234055] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:12.028 [2024-07-26 13:56:39.271124] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:12.028 passed 00:15:12.028 00:15:12.028 Run Summary: Type Total Ran Passed Failed Inactive 00:15:12.028 
suites 1 1 n/a 0 0 00:15:12.028 tests 18 18 18 0 0 00:15:12.028 asserts 360 360 360 0 n/a 00:15:12.028 00:15:12.028 Elapsed time = 1.503 seconds 00:15:12.028 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2944478 00:15:12.028 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 2944478 ']' 00:15:12.028 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 2944478 00:15:12.028 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:12.028 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:12.028 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2944478 00:15:12.028 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:12.028 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:12.028 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2944478' 00:15:12.028 killing process with pid 2944478 00:15:12.028 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 2944478 00:15:12.028 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 2944478 00:15:12.289 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:12.289 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:12.289 00:15:12.289 real 0m6.144s 00:15:12.289 user 0m17.576s 00:15:12.289 sys 0m0.450s 00:15:12.289 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:12.289 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:12.289 ************************************ 00:15:12.289 END TEST nvmf_vfio_user_nvme_compliance 00:15:12.289 ************************************ 00:15:12.289 13:56:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:12.289 13:56:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:12.289 13:56:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:12.289 13:56:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:12.289 ************************************ 00:15:12.289 START TEST nvmf_vfio_user_fuzz 00:15:12.289 ************************************ 00:15:12.289 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:12.289 * Looking for test storage... 
00:15:12.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:12.289 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:12.289 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:12.289 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.289 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.289 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.289 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.289 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.289 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.289 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.289 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.289 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.289 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.550 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:12.550 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:12.550 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.550 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.550 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:12.550 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.550 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:12.550 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.550 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.550 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.550 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2945669 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2945669' 00:15:12.551 Process pid: 2945669 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2945669 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2945669 ']' 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:12.551 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:13.490 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:13.490 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:13.490 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:14.431 malloc0 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
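The rpc_cmd calls traced above amount to the following plain rpc.py sequence; a sketch assuming the default /var/tmp/spdk.sock RPC socket and the same $SPDK_DIR placeholder as before:

"$SPDK_DIR"/scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
# 64 MB malloc bdev with 512-byte blocks, per MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above
"$SPDK_DIR"/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
"$SPDK_DIR"/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
"$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
"$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0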
00:15:14.431 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:46.539 Fuzzing completed. Shutting down the fuzz application 00:15:46.539 00:15:46.539 Dumping successful admin opcodes: 00:15:46.539 8, 9, 10, 24, 00:15:46.539 Dumping successful io opcodes: 00:15:46.539 0, 00:15:46.539 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1065703, total successful commands: 4202, random_seed: 4160865792 00:15:46.539 NS: 0x200003a1ef00 admin qp, Total commands completed: 263158, total successful commands: 2115, random_seed: 2497831104 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2945669 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2945669 ']' 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 2945669 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2945669 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2945669' 00:15:46.539 killing process with pid 2945669 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 2945669 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 2945669 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:46.539 00:15:46.539 real 0m32.779s 00:15:46.539 user 0m31.763s 00:15:46.539 sys 0m30.297s 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.539 
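For completeness, the roughly 30-second fuzz pass summarized above can be repeated by hand against an endpoint configured as in the preceding sketch; the command line below is lifted from the trace, with only the workspace path shortened to the same $SPDK_DIR placeholder:

"$SPDK_DIR"/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a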
************************************ 00:15:46.539 END TEST nvmf_vfio_user_fuzz 00:15:46.539 ************************************ 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:46.539 ************************************ 00:15:46.539 START TEST nvmf_auth_target 00:15:46.539 ************************************ 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:46.539 * Looking for test storage... 00:15:46.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:46.539 13:57:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.539 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 
00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:46.540 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.806 13:57:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:50.806 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:50.806 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:50.806 Found net devices under 0000:86:00.0: cvl_0_0 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:50.806 13:57:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:50.806 Found net devices under 0000:86:00.1: cvl_0_1 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:50.806 13:57:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:50.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:50.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:15:50.806 00:15:50.806 --- 10.0.0.2 ping statistics --- 00:15:50.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.806 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:50.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:50.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.422 ms 00:15:50.806 00:15:50.806 --- 10.0.0.1 ping statistics --- 00:15:50.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.806 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2954487 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2954487 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2954487 ']' 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:50.806 13:57:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:50.806 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.375 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:51.375 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:51.375 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:51.375 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:51.375 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.375 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.375 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2954728 00:15:51.375 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:51.375 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:51.375 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:15:51.375 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:51.375 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:51.375 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:51.375 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:15:51.375 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:51.375 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:51.375 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e0dc5872b60cce55755293e88e5bd8fe77a9076b8f853289 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.SGg 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e0dc5872b60cce55755293e88e5bd8fe77a9076b8f853289 0 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e0dc5872b60cce55755293e88e5bd8fe77a9076b8f853289 0 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e0dc5872b60cce55755293e88e5bd8fe77a9076b8f853289 00:15:51.376 13:57:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.SGg 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.SGg 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.SGg 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d31bfe175a2f16db44f8da1c02ca2ccd72b7e34f34b11b031461480c06e61891 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.JvO 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d31bfe175a2f16db44f8da1c02ca2ccd72b7e34f34b11b031461480c06e61891 3 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d31bfe175a2f16db44f8da1c02ca2ccd72b7e34f34b11b031461480c06e61891 3 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d31bfe175a2f16db44f8da1c02ca2ccd72b7e34f34b11b031461480c06e61891 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.JvO 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.JvO 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.JvO 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:51.376 13:57:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b179ac6bd3ba9079e1e6862c16fb371b 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.nyg 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b179ac6bd3ba9079e1e6862c16fb371b 1 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b179ac6bd3ba9079e1e6862c16fb371b 1 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b179ac6bd3ba9079e1e6862c16fb371b 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.nyg 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.nyg 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.nyg 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:51.376 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9211cceb36cc7a41ea4634813ae62e2922d24d5da477d77a 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.5rM 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9211cceb36cc7a41ea4634813ae62e2922d24d5da477d77a 2 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
9211cceb36cc7a41ea4634813ae62e2922d24d5da477d77a 2 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9211cceb36cc7a41ea4634813ae62e2922d24d5da477d77a 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.5rM 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.5rM 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.5rM 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=14ba50fdcac4fca72724e36655c23131ee61475bcf996d35 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.txO 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 14ba50fdcac4fca72724e36655c23131ee61475bcf996d35 2 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 14ba50fdcac4fca72724e36655c23131ee61475bcf996d35 2 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:51.636 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=14ba50fdcac4fca72724e36655c23131ee61475bcf996d35 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.txO 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.txO 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.txO 00:15:51.637 13:57:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=92d889610656867aa1b96c2b7e0e1214 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.A2E 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 92d889610656867aa1b96c2b7e0e1214 1 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 92d889610656867aa1b96c2b7e0e1214 1 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=92d889610656867aa1b96c2b7e0e1214 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.A2E 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.A2E 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.A2E 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8a9126b298af720173e593b39809d8c0786e4faa74181463b8e0872956c1daea 00:15:51.637 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:51.637 
13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.fKB 00:15:51.637 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8a9126b298af720173e593b39809d8c0786e4faa74181463b8e0872956c1daea 3 00:15:51.637 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8a9126b298af720173e593b39809d8c0786e4faa74181463b8e0872956c1daea 3 00:15:51.637 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:51.637 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:51.637 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8a9126b298af720173e593b39809d8c0786e4faa74181463b8e0872956c1daea 00:15:51.637 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:51.637 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:51.637 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.fKB 00:15:51.637 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.fKB 00:15:51.637 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.fKB 00:15:51.637 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:15:51.637 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2954487 00:15:51.637 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2954487 ']' 00:15:51.637 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.637 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:51.637 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.637 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:51.637 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.896 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:51.896 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:51.897 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2954728 /var/tmp/host.sock 00:15:51.897 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2954728 ']' 00:15:51.897 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:15:51.897 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:51.897 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
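At this point all four DH-HMAC-CHAP secrets (keys[0..3]) and their controller counterparts (ckeys[0..2]) have been generated: gen_dhchap_key reads random bytes with xxd, and format_dhchap_key wraps the resulting hex string with an inline `python -` heredoc whose body is not captured in this trace. A minimal sketch of that wrapping, assuming the DHHC-1 payload is the secret bytes followed by their CRC-32 in little-endian order, base64-encoded; if that assumption holds, the example reproduces the --dhchap-secret value passed to nvme connect later in the trace.

#!/usr/bin/env python3
# Hypothetical equivalent of format_dhchap_key: the heredoc body is not shown
# above, so the 4-byte CRC-32 trailer is an assumption.
import base64
import zlib

def format_dhchap_key(secret: str, digest_id: int) -> str:
    data = secret.encode("ascii")                    # hex string produced by xxd
    crc = zlib.crc32(data).to_bytes(4, "little")     # assumed little-endian CRC-32
    payload = base64.b64encode(data + crc).decode("ascii")
    return f"DHHC-1:{digest_id:02x}:{payload}:"

if __name__ == "__main__":
    # keys[0] and digest 0 (null) as generated above.
    print(format_dhchap_key(
        "e0dc5872b60cce55755293e88e5bd8fe77a9076b8f853289", 0))

These DHHC-1 strings are exactly what the test later passes to nvme connect as --dhchap-secret / --dhchap-ctrl-secret; stripping the DHHC-1:<id>: prefix and base64-decoding a secret should give back the hex key plus the four trailing check bytes.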
00:15:51.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:51.897 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:51.897 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.156 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:52.156 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:52.156 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:15:52.156 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.156 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.156 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.156 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:52.156 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.SGg 00:15:52.156 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.156 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.156 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.156 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.SGg 00:15:52.157 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.SGg 00:15:52.417 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.JvO ]] 00:15:52.417 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.JvO 00:15:52.417 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.417 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.417 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.417 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.JvO 00:15:52.417 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.JvO 00:15:52.417 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:52.417 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.nyg 00:15:52.417 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.417 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.417 13:57:19 
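With both applications up, the loop above registers each key file twice: once with the nvmf_tgt keyring over the default RPC socket (rpc_cmd) and once with the host-side spdk_tgt over /var/tmp/host.sock (hostrpc). A sketch of the same keyring_file_add_key calls driven from Python instead of the shell wrappers; the relative rpc.py path is a placeholder for the full workspace path used in the trace.

#!/usr/bin/env python3
# Register key files with both keyrings, mirroring rpc_cmd / hostrpc above.
import subprocess

RPC_PY = "scripts/rpc.py"   # placeholder; the trace uses the Jenkins workspace path

def keyring_add(name, path, sock=None):
    cmd = [RPC_PY]
    if sock:
        cmd += ["-s", sock]                        # host-side application socket
    cmd += ["keyring_file_add_key", name, path]
    subprocess.run(cmd, check=True)

key_files = {
    "key0": "/tmp/spdk.key-null.SGg",
    "ckey0": "/tmp/spdk.key-sha512.JvO",
    "key1": "/tmp/spdk.key-sha256.nyg",
}
for name, path in key_files.items():
    keyring_add(name, path)                        # target keyring
    keyring_add(name, path, "/var/tmp/host.sock")  # host keyring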
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.417 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.nyg 00:15:52.417 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.nyg 00:15:52.677 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.5rM ]] 00:15:52.677 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5rM 00:15:52.677 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.677 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.677 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.677 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5rM 00:15:52.677 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5rM 00:15:52.936 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:52.936 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.txO 00:15:52.936 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.936 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.936 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.936 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.txO 00:15:52.936 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.txO 00:15:52.936 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.A2E ]] 00:15:52.936 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.A2E 00:15:52.936 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.936 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.936 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.936 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.A2E 00:15:52.936 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.A2E 00:15:53.196 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:53.196 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.fKB 00:15:53.196 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.196 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.196 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.196 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.fKB 00:15:53.196 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.fKB 00:15:53.456 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:15:53.456 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:53.456 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.456 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.456 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:53.456 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:53.715 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:15:53.715 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.715 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:53.715 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:53.715 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:53.715 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.715 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.715 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.715 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.715 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.715 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.715 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.715 00:15:53.715 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.715 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.715 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.975 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.975 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.975 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.975 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.975 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.975 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.975 { 00:15:53.975 "cntlid": 1, 00:15:53.975 "qid": 0, 00:15:53.975 "state": "enabled", 00:15:53.975 "thread": "nvmf_tgt_poll_group_000", 00:15:53.975 "listen_address": { 00:15:53.975 "trtype": "TCP", 00:15:53.975 "adrfam": "IPv4", 00:15:53.975 "traddr": "10.0.0.2", 00:15:53.975 "trsvcid": "4420" 00:15:53.975 }, 00:15:53.975 "peer_address": { 00:15:53.975 "trtype": "TCP", 00:15:53.975 "adrfam": "IPv4", 00:15:53.975 "traddr": "10.0.0.1", 00:15:53.975 "trsvcid": "55024" 00:15:53.975 }, 00:15:53.975 "auth": { 00:15:53.975 "state": "completed", 00:15:53.975 "digest": "sha256", 00:15:53.975 "dhgroup": "null" 00:15:53.975 } 00:15:53.975 } 00:15:53.975 ]' 00:15:53.975 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.975 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.975 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.235 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:54.235 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.235 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.235 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.235 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.235 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:ZTBkYzU4NzJiNjBjY2U1NTc1NTI5M2U4OGU1YmQ4ZmU3N2E5MDc2YjhmODUzMjg5JDjO3w==: --dhchap-ctrl-secret DHHC-1:03:ZDMxYmZlMTc1YTJmMTZkYjQ0ZjhkYTFjMDJjYTJjY2Q3MmI3ZTM0ZjM0YjExYjAzMTQ2MTQ4MGMwNmU2MTg5MU7s0UI=: 00:15:54.804 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.804 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:54.804 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.804 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.804 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.804 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.804 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:54.804 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:55.063 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:15:55.063 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.063 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:55.063 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:55.063 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:55.063 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.063 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.063 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.063 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.063 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.063 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.063 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:15:55.323 00:15:55.323 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.323 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.323 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.323 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.323 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.323 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.323 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.323 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.323 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.323 { 00:15:55.323 "cntlid": 3, 00:15:55.323 "qid": 0, 00:15:55.323 "state": "enabled", 00:15:55.323 "thread": "nvmf_tgt_poll_group_000", 00:15:55.323 "listen_address": { 00:15:55.323 "trtype": "TCP", 00:15:55.323 "adrfam": "IPv4", 00:15:55.323 "traddr": "10.0.0.2", 00:15:55.323 "trsvcid": "4420" 00:15:55.323 }, 00:15:55.323 "peer_address": { 00:15:55.323 "trtype": "TCP", 00:15:55.323 "adrfam": "IPv4", 00:15:55.323 "traddr": "10.0.0.1", 00:15:55.323 "trsvcid": "55056" 00:15:55.323 }, 00:15:55.323 "auth": { 00:15:55.323 "state": "completed", 00:15:55.323 "digest": "sha256", 00:15:55.323 "dhgroup": "null" 00:15:55.323 } 00:15:55.323 } 00:15:55.323 ]' 00:15:55.323 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.584 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.584 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.584 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:55.584 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.584 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.584 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.584 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.584 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjE3OWFjNmJkM2JhOTA3OWUxZTY4NjJjMTZmYjM3MWIjdNi0: --dhchap-ctrl-secret DHHC-1:02:OTIxMWNjZWIzNmNjN2E0MWVhNDYzNDgxM2FlNjJlMjkyMmQyNGQ1ZGE0NzdkNzdhhnc8Aw==: 00:15:56.154 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.154 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:15:56.154 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:56.154 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.154 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.154 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.154 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.154 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:56.154 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:56.415 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:15:56.415 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:56.415 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:56.415 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:56.415 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:56.415 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.415 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.415 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.415 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.415 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.415 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.415 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.674 00:15:56.674 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:56.674 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:56.674 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.934 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.934 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.934 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.934 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.934 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.934 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.935 { 00:15:56.935 "cntlid": 5, 00:15:56.935 "qid": 0, 00:15:56.935 "state": "enabled", 00:15:56.935 "thread": "nvmf_tgt_poll_group_000", 00:15:56.935 "listen_address": { 00:15:56.935 "trtype": "TCP", 00:15:56.935 "adrfam": "IPv4", 00:15:56.935 "traddr": "10.0.0.2", 00:15:56.935 "trsvcid": "4420" 00:15:56.935 }, 00:15:56.935 "peer_address": { 00:15:56.935 "trtype": "TCP", 00:15:56.935 "adrfam": "IPv4", 00:15:56.935 "traddr": "10.0.0.1", 00:15:56.935 "trsvcid": "55086" 00:15:56.935 }, 00:15:56.935 "auth": { 00:15:56.935 "state": "completed", 00:15:56.935 "digest": "sha256", 00:15:56.935 "dhgroup": "null" 00:15:56.935 } 00:15:56.935 } 00:15:56.935 ]' 00:15:56.935 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.935 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.935 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.935 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:56.935 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.935 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.935 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.935 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.194 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTRiYTUwZmRjYWM0ZmNhNzI3MjRlMzY2NTVjMjMxMzFlZTYxNDc1YmNmOTk2ZDM1jujlQA==: --dhchap-ctrl-secret DHHC-1:01:OTJkODg5NjEwNjU2ODY3YWExYjk2YzJiN2UwZTEyMTSGEAw5: 00:15:57.764 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.764 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:57.764 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:57.764 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.764 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.764 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.764 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:57.764 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:57.764 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:15:57.764 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.764 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:57.764 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:57.764 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:57.764 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.764 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:57.764 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.764 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.764 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.764 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:57.764 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:58.025 00:15:58.025 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.025 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.025 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.285 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.285 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.285 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.285 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.285 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.285 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.285 { 00:15:58.285 "cntlid": 7, 00:15:58.285 "qid": 0, 00:15:58.285 "state": "enabled", 00:15:58.285 "thread": "nvmf_tgt_poll_group_000", 00:15:58.285 "listen_address": { 00:15:58.285 "trtype": "TCP", 00:15:58.285 "adrfam": "IPv4", 00:15:58.285 "traddr": "10.0.0.2", 00:15:58.285 "trsvcid": "4420" 00:15:58.285 }, 00:15:58.285 "peer_address": { 00:15:58.285 "trtype": "TCP", 00:15:58.285 "adrfam": "IPv4", 00:15:58.285 "traddr": "10.0.0.1", 00:15:58.285 "trsvcid": "55106" 00:15:58.285 }, 00:15:58.285 "auth": { 00:15:58.285 "state": "completed", 00:15:58.285 "digest": "sha256", 00:15:58.285 "dhgroup": "null" 00:15:58.285 } 00:15:58.285 } 00:15:58.285 ]' 00:15:58.285 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.285 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.285 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.285 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:58.285 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.545 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.545 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.545 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.545 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE5MTI2YjI5OGFmNzIwMTczZTU5M2IzOTgwOWQ4YzA3ODZlNGZhYTc0MTgxNDYzYjhlMDg3Mjk1NmMxZGFlYYq7ZUs=: 00:15:59.115 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.115 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.115 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.115 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.115 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.115 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:59.115 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.115 13:57:26 
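Each connect_authenticate round ends the same way: nvmf_subsystem_get_qpairs is queried and jq pulls .auth.digest, .auth.dhgroup and .auth.state out of the first qpair to confirm the negotiated parameters. The same check expressed in Python, using the qpair record printed just above (cntlid 7, sha256/null) as sample input:

#!/usr/bin/env python3
# Equivalent of the jq-based assertions on nvmf_subsystem_get_qpairs output.
import json

def check_auth(qpairs_json, digest, dhgroup):
    auth = json.loads(qpairs_json)[0]["auth"]
    assert auth["state"] == "completed", auth
    assert auth["digest"] == digest, auth
    assert auth["dhgroup"] == dhgroup, auth

sample = '''[{"cntlid": 7, "qid": 0, "state": "enabled",
              "auth": {"state": "completed", "digest": "sha256", "dhgroup": "null"}}]'''
check_auth(sample, "sha256", "null")
print("auth parameters match")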
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:59.115 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:59.375 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:15:59.375 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:59.375 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:59.375 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:59.375 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:59.375 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.375 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.375 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.375 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.375 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.375 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.375 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.635 00:15:59.635 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.635 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.635 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.635 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.635 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.635 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.635 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.635 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.635 13:57:27 
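From here the trace repeats the same cycle with the next DH group, ffdhe2048 instead of null, again walking through key indexes 0..3. Structurally the remainder of the test is the nested sweep sketched below; the shell functions are stood in by plain callables, and only the digests and DH groups actually visible in this trace are listed.

#!/usr/bin/env python3
# Shape of the sweep driven by target/auth.sh: digest x dhgroup x key index.
DIGESTS = ["sha256"]                      # only digest seen so far in this trace
DHGROUPS = ["null", "ffdhe2048"]          # groups seen so far in this trace
KEY_IDS = [0, 1, 2, 3]                    # keys[0..3] generated above

def sweep(set_options, connect_authenticate):
    for digest in DIGESTS:
        for dhgroup in DHGROUPS:
            for keyid in KEY_IDS:
                # hostrpc bdev_nvme_set_options --dhchap-digests <digest>
                #                               --dhchap-dhgroups <dhgroup>
                set_options(digest, dhgroup)
                # add host with key<keyid>, attach, verify qpair auth,
                # detach, remove host
                connect_authenticate(digest, dhgroup, keyid)

if __name__ == "__main__":
    sweep(lambda d, g: print("set_options", d, g),
          lambda d, g, k: print("connect_authenticate", d, g, f"key{k}"))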
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.635 { 00:15:59.635 "cntlid": 9, 00:15:59.635 "qid": 0, 00:15:59.635 "state": "enabled", 00:15:59.635 "thread": "nvmf_tgt_poll_group_000", 00:15:59.635 "listen_address": { 00:15:59.635 "trtype": "TCP", 00:15:59.635 "adrfam": "IPv4", 00:15:59.635 "traddr": "10.0.0.2", 00:15:59.635 "trsvcid": "4420" 00:15:59.635 }, 00:15:59.635 "peer_address": { 00:15:59.635 "trtype": "TCP", 00:15:59.635 "adrfam": "IPv4", 00:15:59.635 "traddr": "10.0.0.1", 00:15:59.635 "trsvcid": "56362" 00:15:59.635 }, 00:15:59.635 "auth": { 00:15:59.635 "state": "completed", 00:15:59.635 "digest": "sha256", 00:15:59.635 "dhgroup": "ffdhe2048" 00:15:59.635 } 00:15:59.635 } 00:15:59.635 ]' 00:15:59.635 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.895 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:59.895 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.895 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:59.895 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.895 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.895 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.895 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.155 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTBkYzU4NzJiNjBjY2U1NTc1NTI5M2U4OGU1YmQ4ZmU3N2E5MDc2YjhmODUzMjg5JDjO3w==: --dhchap-ctrl-secret DHHC-1:03:ZDMxYmZlMTc1YTJmMTZkYjQ0ZjhkYTFjMDJjYTJjY2Q3MmI3ZTM0ZjM0YjExYjAzMTQ2MTQ4MGMwNmU2MTg5MU7s0UI=: 00:16:00.725 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.725 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:00.725 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.725 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.725 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.725 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.725 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:00.725 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:00.725 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:00.725 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.725 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:00.725 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:00.725 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:00.725 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.725 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.725 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.725 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.725 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.725 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.725 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.985 00:16:00.985 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.985 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.985 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.246 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.246 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.246 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.246 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.246 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.246 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.246 { 00:16:01.246 "cntlid": 11, 00:16:01.246 "qid": 0, 00:16:01.246 "state": "enabled", 00:16:01.246 "thread": "nvmf_tgt_poll_group_000", 00:16:01.246 "listen_address": { 
00:16:01.246 "trtype": "TCP", 00:16:01.246 "adrfam": "IPv4", 00:16:01.246 "traddr": "10.0.0.2", 00:16:01.246 "trsvcid": "4420" 00:16:01.246 }, 00:16:01.246 "peer_address": { 00:16:01.246 "trtype": "TCP", 00:16:01.246 "adrfam": "IPv4", 00:16:01.246 "traddr": "10.0.0.1", 00:16:01.246 "trsvcid": "56380" 00:16:01.246 }, 00:16:01.246 "auth": { 00:16:01.246 "state": "completed", 00:16:01.246 "digest": "sha256", 00:16:01.246 "dhgroup": "ffdhe2048" 00:16:01.246 } 00:16:01.246 } 00:16:01.246 ]' 00:16:01.246 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.246 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.246 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.246 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:01.246 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.246 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.246 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.246 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.506 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjE3OWFjNmJkM2JhOTA3OWUxZTY4NjJjMTZmYjM3MWIjdNi0: --dhchap-ctrl-secret DHHC-1:02:OTIxMWNjZWIzNmNjN2E0MWVhNDYzNDgxM2FlNjJlMjkyMmQyNGQ1ZGE0NzdkNzdhhnc8Aw==: 00:16:02.077 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.077 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:02.077 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.077 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.077 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.077 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.077 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:02.077 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:02.077 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:02.077 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.077 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:02.077 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:02.077 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:02.077 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.077 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.077 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.077 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.077 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.077 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.077 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.337 00:16:02.337 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.337 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.337 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.618 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.618 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.618 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.618 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.618 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.618 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.618 { 00:16:02.618 "cntlid": 13, 00:16:02.618 "qid": 0, 00:16:02.618 "state": "enabled", 00:16:02.618 "thread": "nvmf_tgt_poll_group_000", 00:16:02.618 "listen_address": { 00:16:02.618 "trtype": "TCP", 00:16:02.618 "adrfam": "IPv4", 00:16:02.618 "traddr": "10.0.0.2", 00:16:02.618 "trsvcid": "4420" 00:16:02.618 }, 00:16:02.618 "peer_address": { 00:16:02.618 "trtype": "TCP", 00:16:02.618 "adrfam": "IPv4", 00:16:02.618 "traddr": "10.0.0.1", 00:16:02.618 "trsvcid": "56410" 00:16:02.618 }, 00:16:02.618 "auth": { 00:16:02.618 
"state": "completed", 00:16:02.618 "digest": "sha256", 00:16:02.618 "dhgroup": "ffdhe2048" 00:16:02.618 } 00:16:02.618 } 00:16:02.618 ]' 00:16:02.618 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.618 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:02.618 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:02.618 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:02.618 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.618 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.618 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.618 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.949 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTRiYTUwZmRjYWM0ZmNhNzI3MjRlMzY2NTVjMjMxMzFlZTYxNDc1YmNmOTk2ZDM1jujlQA==: --dhchap-ctrl-secret DHHC-1:01:OTJkODg5NjEwNjU2ODY3YWExYjk2YzJiN2UwZTEyMTSGEAw5: 00:16:03.519 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.519 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:03.519 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.519 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.519 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.519 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.519 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:03.519 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:03.519 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:03.519 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.519 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:03.519 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:03.519 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:16:03.519 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.519 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:03.519 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.519 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.519 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.519 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:03.519 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:03.780 00:16:03.780 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.780 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.780 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.040 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.040 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.040 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.041 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.041 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.041 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.041 { 00:16:04.041 "cntlid": 15, 00:16:04.041 "qid": 0, 00:16:04.041 "state": "enabled", 00:16:04.041 "thread": "nvmf_tgt_poll_group_000", 00:16:04.041 "listen_address": { 00:16:04.041 "trtype": "TCP", 00:16:04.041 "adrfam": "IPv4", 00:16:04.041 "traddr": "10.0.0.2", 00:16:04.041 "trsvcid": "4420" 00:16:04.041 }, 00:16:04.041 "peer_address": { 00:16:04.041 "trtype": "TCP", 00:16:04.041 "adrfam": "IPv4", 00:16:04.041 "traddr": "10.0.0.1", 00:16:04.041 "trsvcid": "56442" 00:16:04.041 }, 00:16:04.041 "auth": { 00:16:04.041 "state": "completed", 00:16:04.041 "digest": "sha256", 00:16:04.041 "dhgroup": "ffdhe2048" 00:16:04.041 } 00:16:04.041 } 00:16:04.041 ]' 00:16:04.041 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.041 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:04.041 13:57:31 
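After each attach, the trace checks the negotiated authentication parameters on the target side. Condensed, the verification performed at target/auth.sh@44 through @48 looks roughly like the sketch below; rpc_cmd, hostrpc and the expected values are taken from this run, while collecting the output into a variable is an assumption made for readability:
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'                     # expect nvme0
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]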
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.041 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:04.041 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.041 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.041 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.041 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.301 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE5MTI2YjI5OGFmNzIwMTczZTU5M2IzOTgwOWQ4YzA3ODZlNGZhYTc0MTgxNDYzYjhlMDg3Mjk1NmMxZGFlYYq7ZUs=: 00:16:04.871 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.871 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.871 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.871 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.871 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.871 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.871 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.871 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.871 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:05.132 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:05.132 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.132 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:05.132 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:05.132 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:05.132 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.132 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.132 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.132 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.132 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.132 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.132 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.132 00:16:05.132 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:05.132 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.132 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:05.392 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.392 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.392 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.392 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.392 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.392 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.392 { 00:16:05.392 "cntlid": 17, 00:16:05.392 "qid": 0, 00:16:05.392 "state": "enabled", 00:16:05.392 "thread": "nvmf_tgt_poll_group_000", 00:16:05.392 "listen_address": { 00:16:05.392 "trtype": "TCP", 00:16:05.392 "adrfam": "IPv4", 00:16:05.392 "traddr": "10.0.0.2", 00:16:05.392 "trsvcid": "4420" 00:16:05.392 }, 00:16:05.392 "peer_address": { 00:16:05.392 "trtype": "TCP", 00:16:05.392 "adrfam": "IPv4", 00:16:05.392 "traddr": "10.0.0.1", 00:16:05.392 "trsvcid": "56474" 00:16:05.392 }, 00:16:05.392 "auth": { 00:16:05.392 "state": "completed", 00:16:05.392 "digest": "sha256", 00:16:05.392 "dhgroup": "ffdhe3072" 00:16:05.392 } 00:16:05.392 } 00:16:05.392 ]' 00:16:05.392 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.392 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.392 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.392 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:05.392 13:57:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:05.652 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.652 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.652 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.652 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTBkYzU4NzJiNjBjY2U1NTc1NTI5M2U4OGU1YmQ4ZmU3N2E5MDc2YjhmODUzMjg5JDjO3w==: --dhchap-ctrl-secret DHHC-1:03:ZDMxYmZlMTc1YTJmMTZkYjQ0ZjhkYTFjMDJjYTJjY2Q3MmI3ZTM0ZjM0YjExYjAzMTQ2MTQ4MGMwNmU2MTg5MU7s0UI=: 00:16:06.240 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.240 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:06.240 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.240 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.240 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.240 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.240 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:06.240 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:06.501 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:06.501 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.501 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:06.501 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:06.501 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:06.501 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.501 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.501 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.501 13:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.501 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.501 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.501 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.761 00:16:06.761 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.761 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.761 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.761 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.761 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.761 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.761 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.761 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.761 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.761 { 00:16:06.761 "cntlid": 19, 00:16:06.761 "qid": 0, 00:16:06.761 "state": "enabled", 00:16:06.761 "thread": "nvmf_tgt_poll_group_000", 00:16:06.761 "listen_address": { 00:16:06.761 "trtype": "TCP", 00:16:06.761 "adrfam": "IPv4", 00:16:06.761 "traddr": "10.0.0.2", 00:16:06.761 "trsvcid": "4420" 00:16:06.761 }, 00:16:06.761 "peer_address": { 00:16:06.761 "trtype": "TCP", 00:16:06.761 "adrfam": "IPv4", 00:16:06.761 "traddr": "10.0.0.1", 00:16:06.761 "trsvcid": "56488" 00:16:06.761 }, 00:16:06.761 "auth": { 00:16:06.761 "state": "completed", 00:16:06.761 "digest": "sha256", 00:16:06.761 "dhgroup": "ffdhe3072" 00:16:06.761 } 00:16:06.761 } 00:16:06.761 ]' 00:16:06.761 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.021 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.021 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.021 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:07.021 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.021 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.021 13:57:34 
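Two RPC paths alternate throughout this trace: rpc_cmd talks to the nvmf target application, while hostrpc (expanded at every target/auth.sh@31 line above) drives a second SPDK application acting as the host, over its own RPC socket. The wrapper is roughly the following; its exact definition lives in target/auth.sh and is not shown in this excerpt, so treat this as an illustrative sketch:
  hostrpc() {
      # host-side bdev_nvme RPCs go to the socket the host app listens on
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
  }
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'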
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.021 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.281 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjE3OWFjNmJkM2JhOTA3OWUxZTY4NjJjMTZmYjM3MWIjdNi0: --dhchap-ctrl-secret DHHC-1:02:OTIxMWNjZWIzNmNjN2E0MWVhNDYzNDgxM2FlNjJlMjkyMmQyNGQ1ZGE0NzdkNzdhhnc8Aw==: 00:16:07.852 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.852 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:07.852 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.852 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.852 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.852 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.852 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:07.852 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:07.852 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:07.852 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.852 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:07.852 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:07.852 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:07.852 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.852 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.852 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.852 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.852 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.852 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.852 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.113 00:16:08.113 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.113 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.113 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.373 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.373 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.373 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.373 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.373 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.373 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.373 { 00:16:08.373 "cntlid": 21, 00:16:08.373 "qid": 0, 00:16:08.373 "state": "enabled", 00:16:08.373 "thread": "nvmf_tgt_poll_group_000", 00:16:08.373 "listen_address": { 00:16:08.373 "trtype": "TCP", 00:16:08.373 "adrfam": "IPv4", 00:16:08.373 "traddr": "10.0.0.2", 00:16:08.373 "trsvcid": "4420" 00:16:08.373 }, 00:16:08.373 "peer_address": { 00:16:08.373 "trtype": "TCP", 00:16:08.373 "adrfam": "IPv4", 00:16:08.373 "traddr": "10.0.0.1", 00:16:08.373 "trsvcid": "56516" 00:16:08.373 }, 00:16:08.373 "auth": { 00:16:08.373 "state": "completed", 00:16:08.373 "digest": "sha256", 00:16:08.373 "dhgroup": "ffdhe3072" 00:16:08.373 } 00:16:08.373 } 00:16:08.373 ]' 00:16:08.373 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.373 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.373 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.373 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:08.373 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.373 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.373 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.373 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.633 
13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTRiYTUwZmRjYWM0ZmNhNzI3MjRlMzY2NTVjMjMxMzFlZTYxNDc1YmNmOTk2ZDM1jujlQA==: --dhchap-ctrl-secret DHHC-1:01:OTJkODg5NjEwNjU2ODY3YWExYjk2YzJiN2UwZTEyMTSGEAw5: 00:16:09.203 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.203 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:09.203 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.203 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.203 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.203 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.203 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:09.203 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:09.463 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:09.463 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.463 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:09.463 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:09.463 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:09.463 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.463 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:09.463 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.463 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.463 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.463 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:09.463 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:09.724 00:16:09.724 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.724 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.724 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.724 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.724 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.724 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.724 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.724 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.724 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.724 { 00:16:09.724 "cntlid": 23, 00:16:09.724 "qid": 0, 00:16:09.724 "state": "enabled", 00:16:09.724 "thread": "nvmf_tgt_poll_group_000", 00:16:09.724 "listen_address": { 00:16:09.724 "trtype": "TCP", 00:16:09.724 "adrfam": "IPv4", 00:16:09.724 "traddr": "10.0.0.2", 00:16:09.724 "trsvcid": "4420" 00:16:09.724 }, 00:16:09.724 "peer_address": { 00:16:09.724 "trtype": "TCP", 00:16:09.724 "adrfam": "IPv4", 00:16:09.724 "traddr": "10.0.0.1", 00:16:09.724 "trsvcid": "34890" 00:16:09.724 }, 00:16:09.724 "auth": { 00:16:09.724 "state": "completed", 00:16:09.724 "digest": "sha256", 00:16:09.724 "dhgroup": "ffdhe3072" 00:16:09.724 } 00:16:09.724 } 00:16:09.724 ]' 00:16:09.724 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.724 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.724 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.984 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:09.984 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.984 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.984 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.984 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.244 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE5MTI2YjI5OGFmNzIwMTczZTU5M2IzOTgwOWQ4YzA3ODZlNGZhYTc0MTgxNDYzYjhlMDg3Mjk1NmMxZGFlYYq7ZUs=: 00:16:10.505 13:57:37 
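Besides the SPDK host application, each key is also exercised with the kernel initiator via nvme-cli, as in the connect that just completed above. The address, NQNs and host UUID below are the ones used throughout this run, but the DHHC-1 strings are placeholders; --dhchap-ctrl-secret is passed only for key indices that have a controller key configured, matching the trace:
  hostid=80aaeb9f-0274-ea11-906e-0017a4403562
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" \
      --dhchap-secret "DHHC-1:00:<host key>" --dhchap-ctrl-secret "DHHC-1:03:<controller key>"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0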
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.765 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:10.765 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.765 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.765 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.765 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.765 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.765 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:10.765 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:10.765 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:10.765 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.765 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:10.765 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:10.765 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:10.765 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.765 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.765 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.765 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.765 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.765 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.765 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.026 00:16:11.026 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.026 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.026 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.286 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.286 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.286 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.286 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.286 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.286 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.286 { 00:16:11.286 "cntlid": 25, 00:16:11.286 "qid": 0, 00:16:11.286 "state": "enabled", 00:16:11.286 "thread": "nvmf_tgt_poll_group_000", 00:16:11.286 "listen_address": { 00:16:11.286 "trtype": "TCP", 00:16:11.286 "adrfam": "IPv4", 00:16:11.286 "traddr": "10.0.0.2", 00:16:11.286 "trsvcid": "4420" 00:16:11.286 }, 00:16:11.286 "peer_address": { 00:16:11.286 "trtype": "TCP", 00:16:11.286 "adrfam": "IPv4", 00:16:11.286 "traddr": "10.0.0.1", 00:16:11.286 "trsvcid": "34922" 00:16:11.286 }, 00:16:11.286 "auth": { 00:16:11.286 "state": "completed", 00:16:11.286 "digest": "sha256", 00:16:11.286 "dhgroup": "ffdhe4096" 00:16:11.286 } 00:16:11.286 } 00:16:11.286 ]' 00:16:11.286 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.286 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:11.286 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.286 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:11.286 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.286 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.286 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.286 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.546 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTBkYzU4NzJiNjBjY2U1NTc1NTI5M2U4OGU1YmQ4ZmU3N2E5MDc2YjhmODUzMjg5JDjO3w==: --dhchap-ctrl-secret DHHC-1:03:ZDMxYmZlMTc1YTJmMTZkYjQ0ZjhkYTFjMDJjYTJjY2Q3MmI3ZTM0ZjM0YjExYjAzMTQ2MTQ4MGMwNmU2MTg5MU7s0UI=: 00:16:12.116 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
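Taken together, the iterations in this trace follow the sweep driven by the loops at target/auth.sh@92 and @93: for every DH group, each configured key is installed on the target, attached from the host app, verified, exercised with nvme-cli, and torn down again. A condensed sketch of one iteration, assuming the keys/ckeys arrays and the $subnqn/$hostnqn variables that auth.sh sets up earlier (not shown in this excerpt):
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
          rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"
          hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
              -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" "${ckey[@]}"
          # verify negotiated digest/dhgroup/state via nvmf_subsystem_get_qpairs and jq
          hostrpc bdev_nvme_detach_controller nvme0
          # repeat the handshake with nvme-cli (target/auth.sh@52 to @55), then clean up
          rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
      done
  done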
00:16:12.116 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:12.116 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.116 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.116 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.116 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.116 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:12.116 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:12.376 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:12.376 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.376 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:12.376 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:12.376 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:12.376 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.376 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.376 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.376 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.376 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.376 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.377 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.637 00:16:12.637 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.637 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.637 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.637 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.637 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.898 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.898 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.898 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.898 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.898 { 00:16:12.898 "cntlid": 27, 00:16:12.898 "qid": 0, 00:16:12.898 "state": "enabled", 00:16:12.898 "thread": "nvmf_tgt_poll_group_000", 00:16:12.898 "listen_address": { 00:16:12.898 "trtype": "TCP", 00:16:12.898 "adrfam": "IPv4", 00:16:12.898 "traddr": "10.0.0.2", 00:16:12.898 "trsvcid": "4420" 00:16:12.898 }, 00:16:12.898 "peer_address": { 00:16:12.898 "trtype": "TCP", 00:16:12.898 "adrfam": "IPv4", 00:16:12.898 "traddr": "10.0.0.1", 00:16:12.898 "trsvcid": "34960" 00:16:12.898 }, 00:16:12.898 "auth": { 00:16:12.898 "state": "completed", 00:16:12.898 "digest": "sha256", 00:16:12.898 "dhgroup": "ffdhe4096" 00:16:12.898 } 00:16:12.898 } 00:16:12.898 ]' 00:16:12.898 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.898 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.898 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.898 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:12.898 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.898 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.898 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.898 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.158 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjE3OWFjNmJkM2JhOTA3OWUxZTY4NjJjMTZmYjM3MWIjdNi0: --dhchap-ctrl-secret DHHC-1:02:OTIxMWNjZWIzNmNjN2E0MWVhNDYzNDgxM2FlNjJlMjkyMmQyNGQ1ZGE0NzdkNzdhhnc8Aw==: 00:16:13.726 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.726 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.726 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.726 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.726 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.726 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.726 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:13.726 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:13.726 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:13.726 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.726 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:13.726 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:13.726 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:13.726 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.726 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.726 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.726 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.726 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.726 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.726 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.986 00:16:13.986 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.986 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.986 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.248 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.248 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.248 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.248 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.248 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.248 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.248 { 00:16:14.248 "cntlid": 29, 00:16:14.248 "qid": 0, 00:16:14.248 "state": "enabled", 00:16:14.248 "thread": "nvmf_tgt_poll_group_000", 00:16:14.248 "listen_address": { 00:16:14.248 "trtype": "TCP", 00:16:14.248 "adrfam": "IPv4", 00:16:14.248 "traddr": "10.0.0.2", 00:16:14.248 "trsvcid": "4420" 00:16:14.248 }, 00:16:14.248 "peer_address": { 00:16:14.248 "trtype": "TCP", 00:16:14.248 "adrfam": "IPv4", 00:16:14.248 "traddr": "10.0.0.1", 00:16:14.248 "trsvcid": "34976" 00:16:14.248 }, 00:16:14.248 "auth": { 00:16:14.248 "state": "completed", 00:16:14.248 "digest": "sha256", 00:16:14.248 "dhgroup": "ffdhe4096" 00:16:14.248 } 00:16:14.248 } 00:16:14.248 ]' 00:16:14.248 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.248 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.248 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.248 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:14.248 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.248 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.248 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.248 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.506 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTRiYTUwZmRjYWM0ZmNhNzI3MjRlMzY2NTVjMjMxMzFlZTYxNDc1YmNmOTk2ZDM1jujlQA==: --dhchap-ctrl-secret DHHC-1:01:OTJkODg5NjEwNjU2ODY3YWExYjk2YzJiN2UwZTEyMTSGEAw5: 00:16:15.073 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.073 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:15.073 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.073 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.073 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.073 13:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.073 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:15.073 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:15.332 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:15.332 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.332 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:15.332 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:15.332 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:15.332 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.332 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:15.332 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.332 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.332 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.332 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:15.332 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:15.590 00:16:15.591 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.591 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.591 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.849 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.849 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.849 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.849 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.849 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:15.849 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.849 { 00:16:15.849 "cntlid": 31, 00:16:15.849 "qid": 0, 00:16:15.849 "state": "enabled", 00:16:15.849 "thread": "nvmf_tgt_poll_group_000", 00:16:15.849 "listen_address": { 00:16:15.849 "trtype": "TCP", 00:16:15.849 "adrfam": "IPv4", 00:16:15.849 "traddr": "10.0.0.2", 00:16:15.849 "trsvcid": "4420" 00:16:15.849 }, 00:16:15.849 "peer_address": { 00:16:15.849 "trtype": "TCP", 00:16:15.849 "adrfam": "IPv4", 00:16:15.849 "traddr": "10.0.0.1", 00:16:15.849 "trsvcid": "35006" 00:16:15.849 }, 00:16:15.849 "auth": { 00:16:15.849 "state": "completed", 00:16:15.849 "digest": "sha256", 00:16:15.849 "dhgroup": "ffdhe4096" 00:16:15.849 } 00:16:15.849 } 00:16:15.849 ]' 00:16:15.849 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.849 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.849 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.849 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:15.849 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.849 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.849 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.849 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.107 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE5MTI2YjI5OGFmNzIwMTczZTU5M2IzOTgwOWQ4YzA3ODZlNGZhYTc0MTgxNDYzYjhlMDg3Mjk1NmMxZGFlYYq7ZUs=: 00:16:16.674 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.674 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.674 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.674 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.675 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.675 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.675 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.675 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:16.675 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:16.675 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:16.675 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.675 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:16.675 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:16.675 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:16.675 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.675 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.675 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.675 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.675 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.675 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.675 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.305 00:16:17.305 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.305 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.305 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.305 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.305 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.305 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.305 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.305 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.305 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.305 { 00:16:17.305 "cntlid": 33, 00:16:17.305 "qid": 0, 00:16:17.305 "state": "enabled", 00:16:17.305 "thread": "nvmf_tgt_poll_group_000", 00:16:17.305 "listen_address": { 
00:16:17.305 "trtype": "TCP", 00:16:17.305 "adrfam": "IPv4", 00:16:17.305 "traddr": "10.0.0.2", 00:16:17.305 "trsvcid": "4420" 00:16:17.305 }, 00:16:17.305 "peer_address": { 00:16:17.305 "trtype": "TCP", 00:16:17.305 "adrfam": "IPv4", 00:16:17.305 "traddr": "10.0.0.1", 00:16:17.305 "trsvcid": "35020" 00:16:17.305 }, 00:16:17.305 "auth": { 00:16:17.305 "state": "completed", 00:16:17.305 "digest": "sha256", 00:16:17.305 "dhgroup": "ffdhe6144" 00:16:17.305 } 00:16:17.305 } 00:16:17.305 ]' 00:16:17.305 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.305 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.305 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.305 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:17.305 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.305 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.305 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.305 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.564 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTBkYzU4NzJiNjBjY2U1NTc1NTI5M2U4OGU1YmQ4ZmU3N2E5MDc2YjhmODUzMjg5JDjO3w==: --dhchap-ctrl-secret DHHC-1:03:ZDMxYmZlMTc1YTJmMTZkYjQ0ZjhkYTFjMDJjYTJjY2Q3MmI3ZTM0ZjM0YjExYjAzMTQ2MTQ4MGMwNmU2MTg5MU7s0UI=: 00:16:18.129 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.129 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.129 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.129 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.129 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.129 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.129 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:18.130 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:18.388 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:18.388 13:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.388 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:18.388 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:18.388 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:18.388 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.388 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.388 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.388 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.388 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.388 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.388 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.646 00:16:18.646 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.646 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:18.646 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.904 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.904 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.904 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.904 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.904 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.904 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.904 { 00:16:18.904 "cntlid": 35, 00:16:18.904 "qid": 0, 00:16:18.904 "state": "enabled", 00:16:18.904 "thread": "nvmf_tgt_poll_group_000", 00:16:18.904 "listen_address": { 00:16:18.904 "trtype": "TCP", 00:16:18.904 "adrfam": "IPv4", 00:16:18.904 "traddr": "10.0.0.2", 00:16:18.904 "trsvcid": "4420" 00:16:18.904 }, 00:16:18.904 "peer_address": { 00:16:18.904 "trtype": "TCP", 00:16:18.904 "adrfam": "IPv4", 00:16:18.904 "traddr": "10.0.0.1", 00:16:18.904 "trsvcid": "35056" 00:16:18.904 
}, 00:16:18.904 "auth": { 00:16:18.904 "state": "completed", 00:16:18.904 "digest": "sha256", 00:16:18.904 "dhgroup": "ffdhe6144" 00:16:18.905 } 00:16:18.905 } 00:16:18.905 ]' 00:16:18.905 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.905 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.905 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.905 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:18.905 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:18.905 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.905 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.905 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.163 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjE3OWFjNmJkM2JhOTA3OWUxZTY4NjJjMTZmYjM3MWIjdNi0: --dhchap-ctrl-secret DHHC-1:02:OTIxMWNjZWIzNmNjN2E0MWVhNDYzNDgxM2FlNjJlMjkyMmQyNGQ1ZGE0NzdkNzdhhnc8Aw==: 00:16:19.730 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.730 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:19.730 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.730 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.730 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.730 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.730 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:19.730 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:19.988 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:19.988 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.988 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:19.988 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:19.988 13:57:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:19.988 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.988 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.988 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.988 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.988 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.989 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.989 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.247 00:16:20.247 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.247 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.247 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.505 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.505 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.505 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.505 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.505 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.505 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.505 { 00:16:20.505 "cntlid": 37, 00:16:20.505 "qid": 0, 00:16:20.505 "state": "enabled", 00:16:20.505 "thread": "nvmf_tgt_poll_group_000", 00:16:20.505 "listen_address": { 00:16:20.505 "trtype": "TCP", 00:16:20.505 "adrfam": "IPv4", 00:16:20.505 "traddr": "10.0.0.2", 00:16:20.505 "trsvcid": "4420" 00:16:20.505 }, 00:16:20.505 "peer_address": { 00:16:20.505 "trtype": "TCP", 00:16:20.505 "adrfam": "IPv4", 00:16:20.505 "traddr": "10.0.0.1", 00:16:20.505 "trsvcid": "35426" 00:16:20.505 }, 00:16:20.505 "auth": { 00:16:20.505 "state": "completed", 00:16:20.505 "digest": "sha256", 00:16:20.505 "dhgroup": "ffdhe6144" 00:16:20.505 } 00:16:20.505 } 00:16:20.505 ]' 00:16:20.505 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.505 13:57:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.505 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.505 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:20.505 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.506 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.506 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.506 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.764 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTRiYTUwZmRjYWM0ZmNhNzI3MjRlMzY2NTVjMjMxMzFlZTYxNDc1YmNmOTk2ZDM1jujlQA==: --dhchap-ctrl-secret DHHC-1:01:OTJkODg5NjEwNjU2ODY3YWExYjk2YzJiN2UwZTEyMTSGEAw5: 00:16:21.331 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.331 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.331 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.331 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.331 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.331 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.332 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:21.332 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:21.332 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:21.332 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.332 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:21.332 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:21.332 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:21.332 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.332 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:21.332 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.332 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.332 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.332 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:21.332 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:21.901 00:16:21.901 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.901 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.901 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.901 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.901 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.901 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.901 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.901 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.901 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.901 { 00:16:21.901 "cntlid": 39, 00:16:21.901 "qid": 0, 00:16:21.901 "state": "enabled", 00:16:21.901 "thread": "nvmf_tgt_poll_group_000", 00:16:21.901 "listen_address": { 00:16:21.901 "trtype": "TCP", 00:16:21.901 "adrfam": "IPv4", 00:16:21.901 "traddr": "10.0.0.2", 00:16:21.901 "trsvcid": "4420" 00:16:21.901 }, 00:16:21.901 "peer_address": { 00:16:21.901 "trtype": "TCP", 00:16:21.901 "adrfam": "IPv4", 00:16:21.901 "traddr": "10.0.0.1", 00:16:21.901 "trsvcid": "35450" 00:16:21.901 }, 00:16:21.901 "auth": { 00:16:21.901 "state": "completed", 00:16:21.901 "digest": "sha256", 00:16:21.901 "dhgroup": "ffdhe6144" 00:16:21.901 } 00:16:21.901 } 00:16:21.901 ]' 00:16:21.901 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.161 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.161 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.161 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:22.161 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.161 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.161 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.161 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.161 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE5MTI2YjI5OGFmNzIwMTczZTU5M2IzOTgwOWQ4YzA3ODZlNGZhYTc0MTgxNDYzYjhlMDg3Mjk1NmMxZGFlYYq7ZUs=: 00:16:22.729 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.729 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:22.729 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.729 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.729 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.729 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:22.729 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:22.729 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:22.730 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:22.989 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:22.989 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.989 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:22.989 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:22.989 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:22.989 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.989 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.989 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.989 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:22.989 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.989 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.989 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.559 00:16:23.559 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.559 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.559 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.559 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.559 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.559 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.559 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.819 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.819 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.819 { 00:16:23.819 "cntlid": 41, 00:16:23.819 "qid": 0, 00:16:23.819 "state": "enabled", 00:16:23.819 "thread": "nvmf_tgt_poll_group_000", 00:16:23.819 "listen_address": { 00:16:23.819 "trtype": "TCP", 00:16:23.819 "adrfam": "IPv4", 00:16:23.819 "traddr": "10.0.0.2", 00:16:23.819 "trsvcid": "4420" 00:16:23.819 }, 00:16:23.819 "peer_address": { 00:16:23.819 "trtype": "TCP", 00:16:23.819 "adrfam": "IPv4", 00:16:23.819 "traddr": "10.0.0.1", 00:16:23.819 "trsvcid": "35472" 00:16:23.819 }, 00:16:23.819 "auth": { 00:16:23.819 "state": "completed", 00:16:23.819 "digest": "sha256", 00:16:23.819 "dhgroup": "ffdhe8192" 00:16:23.819 } 00:16:23.819 } 00:16:23.819 ]' 00:16:23.819 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.819 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.819 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.819 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:23.819 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.819 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.819 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:23.819 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.077 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTBkYzU4NzJiNjBjY2U1NTc1NTI5M2U4OGU1YmQ4ZmU3N2E5MDc2YjhmODUzMjg5JDjO3w==: --dhchap-ctrl-secret DHHC-1:03:ZDMxYmZlMTc1YTJmMTZkYjQ0ZjhkYTFjMDJjYTJjY2Q3MmI3ZTM0ZjM0YjExYjAzMTQ2MTQ4MGMwNmU2MTg5MU7s0UI=: 00:16:24.644 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.644 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.644 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.644 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.644 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.644 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.644 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:24.644 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:24.644 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:24.644 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.644 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:24.644 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:24.644 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:24.644 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.644 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.644 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.644 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.644 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.644 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.644 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.210 00:16:25.210 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.210 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.210 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.470 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.470 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.470 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.470 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.470 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.470 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.470 { 00:16:25.470 "cntlid": 43, 00:16:25.470 "qid": 0, 00:16:25.470 "state": "enabled", 00:16:25.470 "thread": "nvmf_tgt_poll_group_000", 00:16:25.470 "listen_address": { 00:16:25.470 "trtype": "TCP", 00:16:25.470 "adrfam": "IPv4", 00:16:25.470 "traddr": "10.0.0.2", 00:16:25.470 "trsvcid": "4420" 00:16:25.470 }, 00:16:25.470 "peer_address": { 00:16:25.470 "trtype": "TCP", 00:16:25.470 "adrfam": "IPv4", 00:16:25.470 "traddr": "10.0.0.1", 00:16:25.470 "trsvcid": "35512" 00:16:25.470 }, 00:16:25.470 "auth": { 00:16:25.470 "state": "completed", 00:16:25.470 "digest": "sha256", 00:16:25.470 "dhgroup": "ffdhe8192" 00:16:25.470 } 00:16:25.470 } 00:16:25.470 ]' 00:16:25.470 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.470 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.470 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.470 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:25.470 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.470 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.470 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.470 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.729 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjE3OWFjNmJkM2JhOTA3OWUxZTY4NjJjMTZmYjM3MWIjdNi0: --dhchap-ctrl-secret DHHC-1:02:OTIxMWNjZWIzNmNjN2E0MWVhNDYzNDgxM2FlNjJlMjkyMmQyNGQ1ZGE0NzdkNzdhhnc8Aw==: 00:16:26.299 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.299 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.299 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.299 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.299 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.299 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.299 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:26.299 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:26.299 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:26.299 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.299 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:26.299 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:26.299 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:26.299 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.299 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.299 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.299 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.299 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.299 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.299 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.869 00:16:26.869 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.869 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.869 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.129 { 00:16:27.129 "cntlid": 45, 00:16:27.129 "qid": 0, 00:16:27.129 "state": "enabled", 00:16:27.129 "thread": "nvmf_tgt_poll_group_000", 00:16:27.129 "listen_address": { 00:16:27.129 "trtype": "TCP", 00:16:27.129 "adrfam": "IPv4", 00:16:27.129 "traddr": "10.0.0.2", 00:16:27.129 "trsvcid": "4420" 00:16:27.129 }, 00:16:27.129 "peer_address": { 00:16:27.129 "trtype": "TCP", 00:16:27.129 "adrfam": "IPv4", 00:16:27.129 "traddr": "10.0.0.1", 00:16:27.129 "trsvcid": "35548" 00:16:27.129 }, 00:16:27.129 "auth": { 00:16:27.129 "state": "completed", 00:16:27.129 "digest": "sha256", 00:16:27.129 "dhgroup": "ffdhe8192" 00:16:27.129 } 00:16:27.129 } 00:16:27.129 ]' 00:16:27.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:27.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.388 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTRiYTUwZmRjYWM0ZmNhNzI3MjRlMzY2NTVjMjMxMzFlZTYxNDc1YmNmOTk2ZDM1jujlQA==: --dhchap-ctrl-secret 
DHHC-1:01:OTJkODg5NjEwNjU2ODY3YWExYjk2YzJiN2UwZTEyMTSGEAw5: 00:16:27.958 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.958 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:27.958 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.958 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.958 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.958 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.958 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:27.958 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:27.958 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:16:27.958 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.958 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:27.958 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:27.958 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:27.958 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.958 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:27.958 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.958 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.958 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.958 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:27.958 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.528 00:16:28.528 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.528 13:57:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.528 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.528 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.528 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.528 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.528 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.787 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.787 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.787 { 00:16:28.787 "cntlid": 47, 00:16:28.787 "qid": 0, 00:16:28.787 "state": "enabled", 00:16:28.787 "thread": "nvmf_tgt_poll_group_000", 00:16:28.787 "listen_address": { 00:16:28.787 "trtype": "TCP", 00:16:28.787 "adrfam": "IPv4", 00:16:28.787 "traddr": "10.0.0.2", 00:16:28.787 "trsvcid": "4420" 00:16:28.787 }, 00:16:28.787 "peer_address": { 00:16:28.787 "trtype": "TCP", 00:16:28.787 "adrfam": "IPv4", 00:16:28.787 "traddr": "10.0.0.1", 00:16:28.787 "trsvcid": "35562" 00:16:28.787 }, 00:16:28.787 "auth": { 00:16:28.787 "state": "completed", 00:16:28.787 "digest": "sha256", 00:16:28.787 "dhgroup": "ffdhe8192" 00:16:28.787 } 00:16:28.787 } 00:16:28.787 ]' 00:16:28.787 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.787 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.787 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.787 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:28.787 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.787 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.787 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.787 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.047 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE5MTI2YjI5OGFmNzIwMTczZTU5M2IzOTgwOWQ4YzA3ODZlNGZhYTc0MTgxNDYzYjhlMDg3Mjk1NmMxZGFlYYq7ZUs=: 00:16:29.614 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.614 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.614 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.614 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.614 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.614 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:29.614 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.614 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.614 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:29.614 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:29.614 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:16:29.614 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.615 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:29.615 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:29.615 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:29.615 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.615 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.615 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.615 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.615 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.615 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.615 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.874 00:16:29.874 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.874 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:29.874 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.133 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.133 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.133 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.133 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.133 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.133 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.133 { 00:16:30.133 "cntlid": 49, 00:16:30.133 "qid": 0, 00:16:30.133 "state": "enabled", 00:16:30.133 "thread": "nvmf_tgt_poll_group_000", 00:16:30.133 "listen_address": { 00:16:30.133 "trtype": "TCP", 00:16:30.133 "adrfam": "IPv4", 00:16:30.133 "traddr": "10.0.0.2", 00:16:30.133 "trsvcid": "4420" 00:16:30.133 }, 00:16:30.133 "peer_address": { 00:16:30.133 "trtype": "TCP", 00:16:30.133 "adrfam": "IPv4", 00:16:30.133 "traddr": "10.0.0.1", 00:16:30.133 "trsvcid": "56810" 00:16:30.133 }, 00:16:30.133 "auth": { 00:16:30.133 "state": "completed", 00:16:30.133 "digest": "sha384", 00:16:30.133 "dhgroup": "null" 00:16:30.133 } 00:16:30.133 } 00:16:30.133 ]' 00:16:30.133 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.133 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:30.133 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.133 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:30.133 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.134 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.134 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.134 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.393 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTBkYzU4NzJiNjBjY2U1NTc1NTI5M2U4OGU1YmQ4ZmU3N2E5MDc2YjhmODUzMjg5JDjO3w==: --dhchap-ctrl-secret DHHC-1:03:ZDMxYmZlMTc1YTJmMTZkYjQ0ZjhkYTFjMDJjYTJjY2Q3MmI3ZTM0ZjM0YjExYjAzMTQ2MTQ4MGMwNmU2MTg5MU7s0UI=: 00:16:30.962 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.962 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.962 13:57:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.962 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.962 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.962 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.962 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:30.962 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:31.234 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:31.234 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.234 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:31.234 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:31.234 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:31.234 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.234 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.234 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.234 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.234 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.234 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.234 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.234 00:16:31.519 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.519 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.519 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.519 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.519 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.519 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.519 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.519 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.519 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.519 { 00:16:31.519 "cntlid": 51, 00:16:31.519 "qid": 0, 00:16:31.519 "state": "enabled", 00:16:31.519 "thread": "nvmf_tgt_poll_group_000", 00:16:31.519 "listen_address": { 00:16:31.519 "trtype": "TCP", 00:16:31.519 "adrfam": "IPv4", 00:16:31.519 "traddr": "10.0.0.2", 00:16:31.519 "trsvcid": "4420" 00:16:31.519 }, 00:16:31.519 "peer_address": { 00:16:31.519 "trtype": "TCP", 00:16:31.519 "adrfam": "IPv4", 00:16:31.519 "traddr": "10.0.0.1", 00:16:31.519 "trsvcid": "56820" 00:16:31.519 }, 00:16:31.519 "auth": { 00:16:31.519 "state": "completed", 00:16:31.519 "digest": "sha384", 00:16:31.519 "dhgroup": "null" 00:16:31.519 } 00:16:31.519 } 00:16:31.519 ]' 00:16:31.519 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.519 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.519 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.519 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:31.519 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.778 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.778 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.778 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.778 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjE3OWFjNmJkM2JhOTA3OWUxZTY4NjJjMTZmYjM3MWIjdNi0: --dhchap-ctrl-secret DHHC-1:02:OTIxMWNjZWIzNmNjN2E0MWVhNDYzNDgxM2FlNjJlMjkyMmQyNGQ1ZGE0NzdkNzdhhnc8Aw==: 00:16:32.348 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.348 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.348 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.348 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.348 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.348 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.348 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:32.348 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:32.609 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:16:32.609 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.609 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:32.609 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:32.609 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:32.609 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.609 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.609 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.609 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.609 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.609 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.609 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.868 00:16:32.868 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.868 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.868 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.868 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.868 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.868 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.868 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.128 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:33.128 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.128 { 00:16:33.128 "cntlid": 53, 00:16:33.128 "qid": 0, 00:16:33.128 "state": "enabled", 00:16:33.128 "thread": "nvmf_tgt_poll_group_000", 00:16:33.128 "listen_address": { 00:16:33.128 "trtype": "TCP", 00:16:33.128 "adrfam": "IPv4", 00:16:33.128 "traddr": "10.0.0.2", 00:16:33.128 "trsvcid": "4420" 00:16:33.128 }, 00:16:33.128 "peer_address": { 00:16:33.128 "trtype": "TCP", 00:16:33.128 "adrfam": "IPv4", 00:16:33.128 "traddr": "10.0.0.1", 00:16:33.128 "trsvcid": "56840" 00:16:33.128 }, 00:16:33.128 "auth": { 00:16:33.128 "state": "completed", 00:16:33.128 "digest": "sha384", 00:16:33.128 "dhgroup": "null" 00:16:33.128 } 00:16:33.128 } 00:16:33.128 ]' 00:16:33.128 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.128 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.128 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.128 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:33.128 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.128 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.128 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.128 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.387 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTRiYTUwZmRjYWM0ZmNhNzI3MjRlMzY2NTVjMjMxMzFlZTYxNDc1YmNmOTk2ZDM1jujlQA==: --dhchap-ctrl-secret DHHC-1:01:OTJkODg5NjEwNjU2ODY3YWExYjk2YzJiN2UwZTEyMTSGEAw5: 00:16:33.957 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.957 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.957 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.957 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.957 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.957 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.957 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:33.957 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:33.957 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:16:33.957 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.957 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:33.957 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:33.957 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:33.957 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.957 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:33.957 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.957 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.957 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.957 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:33.957 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.217 00:16:34.217 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.217 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.217 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.476 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.476 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.476 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.476 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.476 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.476 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.476 { 00:16:34.476 "cntlid": 55, 00:16:34.476 "qid": 0, 00:16:34.476 "state": "enabled", 00:16:34.476 "thread": "nvmf_tgt_poll_group_000", 00:16:34.476 "listen_address": { 00:16:34.476 "trtype": "TCP", 00:16:34.476 "adrfam": "IPv4", 00:16:34.476 "traddr": "10.0.0.2", 00:16:34.476 "trsvcid": "4420" 00:16:34.476 }, 00:16:34.476 "peer_address": { 
00:16:34.476 "trtype": "TCP", 00:16:34.476 "adrfam": "IPv4", 00:16:34.476 "traddr": "10.0.0.1", 00:16:34.476 "trsvcid": "56846" 00:16:34.476 }, 00:16:34.476 "auth": { 00:16:34.476 "state": "completed", 00:16:34.476 "digest": "sha384", 00:16:34.476 "dhgroup": "null" 00:16:34.476 } 00:16:34.476 } 00:16:34.476 ]' 00:16:34.476 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.476 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.476 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.476 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:34.476 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.476 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.476 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.476 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.735 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE5MTI2YjI5OGFmNzIwMTczZTU5M2IzOTgwOWQ4YzA3ODZlNGZhYTc0MTgxNDYzYjhlMDg3Mjk1NmMxZGFlYYq7ZUs=: 00:16:35.303 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.303 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.303 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.303 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.303 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.304 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.304 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.304 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:35.304 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:35.564 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:16:35.564 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.564 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:16:35.564 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:35.564 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:35.564 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.564 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.564 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.564 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.564 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.564 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.564 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.564 00:16:35.825 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.825 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.825 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.825 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.825 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.825 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.825 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.825 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.825 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.825 { 00:16:35.825 "cntlid": 57, 00:16:35.825 "qid": 0, 00:16:35.825 "state": "enabled", 00:16:35.825 "thread": "nvmf_tgt_poll_group_000", 00:16:35.825 "listen_address": { 00:16:35.825 "trtype": "TCP", 00:16:35.825 "adrfam": "IPv4", 00:16:35.825 "traddr": "10.0.0.2", 00:16:35.825 "trsvcid": "4420" 00:16:35.825 }, 00:16:35.825 "peer_address": { 00:16:35.825 "trtype": "TCP", 00:16:35.825 "adrfam": "IPv4", 00:16:35.825 "traddr": "10.0.0.1", 00:16:35.825 "trsvcid": "56866" 00:16:35.825 }, 00:16:35.825 "auth": { 00:16:35.825 "state": "completed", 00:16:35.825 "digest": "sha384", 00:16:35.825 "dhgroup": "ffdhe2048" 00:16:35.825 } 00:16:35.825 } 00:16:35.825 ]' 
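The trace above has just finished one sha384/ffdhe2048 pass with key0; condensed into a plain command sequence (host NQN, target address, key names, and socket path taken from the log; DHHC-1 secrets elided; the exact wiring lives in target/auth.sh), the round trip looks roughly like this:

# Rough sketch of one DH-HMAC-CHAP round trip as exercised by the trace.
# Assumes the SPDK target and the host-side bdev_nvme RPC server
# (/var/tmp/host.sock) are already running; paths are relative to the spdk tree.

# 1. Restrict the host-side initiator to one digest/dhgroup combination.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# 2. Allow the host on the subsystem with a DH-HMAC-CHAP key (plus optional controller key).
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach a controller from the host side with the matching key names.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4. Verify the negotiated auth parameters on the target, then detach the host controller.
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# 5. Repeat the handshake with the kernel initiator using the raw DHHC-1 secrets,
#    then remove the host entry before the next digest/dhgroup combination.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-secret "DHHC-1:00:<key0 secret>" --dhchap-ctrl-secret "DHHC-1:03:<ckey0 secret>"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

The remainder of the log below repeats this sequence for the other key indexes before moving on to the next dhgroup.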
00:16:35.825 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.825 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:35.825 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.085 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.085 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.085 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.085 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.085 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.085 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTBkYzU4NzJiNjBjY2U1NTc1NTI5M2U4OGU1YmQ4ZmU3N2E5MDc2YjhmODUzMjg5JDjO3w==: --dhchap-ctrl-secret DHHC-1:03:ZDMxYmZlMTc1YTJmMTZkYjQ0ZjhkYTFjMDJjYTJjY2Q3MmI3ZTM0ZjM0YjExYjAzMTQ2MTQ4MGMwNmU2MTg5MU7s0UI=: 00:16:36.653 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.653 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.653 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.653 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.653 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.653 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.653 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:36.654 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:36.914 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:16:36.914 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.914 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:36.914 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:36.914 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:36.914 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.914 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.914 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.914 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.914 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.914 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.914 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.175 00:16:37.175 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.175 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.175 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.175 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.175 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.175 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.175 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.435 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.435 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.435 { 00:16:37.435 "cntlid": 59, 00:16:37.435 "qid": 0, 00:16:37.435 "state": "enabled", 00:16:37.435 "thread": "nvmf_tgt_poll_group_000", 00:16:37.435 "listen_address": { 00:16:37.435 "trtype": "TCP", 00:16:37.435 "adrfam": "IPv4", 00:16:37.435 "traddr": "10.0.0.2", 00:16:37.435 "trsvcid": "4420" 00:16:37.435 }, 00:16:37.435 "peer_address": { 00:16:37.435 "trtype": "TCP", 00:16:37.435 "adrfam": "IPv4", 00:16:37.435 "traddr": "10.0.0.1", 00:16:37.435 "trsvcid": "56890" 00:16:37.435 }, 00:16:37.435 "auth": { 00:16:37.435 "state": "completed", 00:16:37.435 "digest": "sha384", 00:16:37.435 "dhgroup": "ffdhe2048" 00:16:37.435 } 00:16:37.435 } 00:16:37.435 ]' 00:16:37.435 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.435 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.435 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.435 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:37.435 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.435 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.435 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.435 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.695 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjE3OWFjNmJkM2JhOTA3OWUxZTY4NjJjMTZmYjM3MWIjdNi0: --dhchap-ctrl-secret DHHC-1:02:OTIxMWNjZWIzNmNjN2E0MWVhNDYzNDgxM2FlNjJlMjkyMmQyNGQ1ZGE0NzdkNzdhhnc8Aw==: 00:16:38.279 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.279 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.279 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.279 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.279 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.279 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.279 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:38.279 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:38.279 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:16:38.279 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.279 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:38.279 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:38.279 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:38.279 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.279 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.279 
13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.279 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.279 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.279 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.280 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.540 00:16:38.540 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.540 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.540 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.800 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.800 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.800 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.800 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.800 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.800 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.800 { 00:16:38.800 "cntlid": 61, 00:16:38.800 "qid": 0, 00:16:38.800 "state": "enabled", 00:16:38.800 "thread": "nvmf_tgt_poll_group_000", 00:16:38.800 "listen_address": { 00:16:38.801 "trtype": "TCP", 00:16:38.801 "adrfam": "IPv4", 00:16:38.801 "traddr": "10.0.0.2", 00:16:38.801 "trsvcid": "4420" 00:16:38.801 }, 00:16:38.801 "peer_address": { 00:16:38.801 "trtype": "TCP", 00:16:38.801 "adrfam": "IPv4", 00:16:38.801 "traddr": "10.0.0.1", 00:16:38.801 "trsvcid": "56918" 00:16:38.801 }, 00:16:38.801 "auth": { 00:16:38.801 "state": "completed", 00:16:38.801 "digest": "sha384", 00:16:38.801 "dhgroup": "ffdhe2048" 00:16:38.801 } 00:16:38.801 } 00:16:38.801 ]' 00:16:38.801 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.801 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.801 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.801 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:38.801 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.801 13:58:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.801 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.801 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.061 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTRiYTUwZmRjYWM0ZmNhNzI3MjRlMzY2NTVjMjMxMzFlZTYxNDc1YmNmOTk2ZDM1jujlQA==: --dhchap-ctrl-secret DHHC-1:01:OTJkODg5NjEwNjU2ODY3YWExYjk2YzJiN2UwZTEyMTSGEAw5: 00:16:39.632 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.632 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.632 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.632 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.632 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.632 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.632 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:39.632 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:39.892 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:16:39.892 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.892 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:39.892 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:39.892 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:39.892 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.892 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:39.892 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.892 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.892 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.892 
13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:39.892 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:39.892 00:16:39.892 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.892 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.892 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.152 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.152 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.152 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.152 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.152 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.152 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.152 { 00:16:40.152 "cntlid": 63, 00:16:40.152 "qid": 0, 00:16:40.152 "state": "enabled", 00:16:40.152 "thread": "nvmf_tgt_poll_group_000", 00:16:40.152 "listen_address": { 00:16:40.152 "trtype": "TCP", 00:16:40.152 "adrfam": "IPv4", 00:16:40.152 "traddr": "10.0.0.2", 00:16:40.152 "trsvcid": "4420" 00:16:40.152 }, 00:16:40.152 "peer_address": { 00:16:40.152 "trtype": "TCP", 00:16:40.152 "adrfam": "IPv4", 00:16:40.152 "traddr": "10.0.0.1", 00:16:40.152 "trsvcid": "56908" 00:16:40.152 }, 00:16:40.152 "auth": { 00:16:40.152 "state": "completed", 00:16:40.152 "digest": "sha384", 00:16:40.152 "dhgroup": "ffdhe2048" 00:16:40.152 } 00:16:40.152 } 00:16:40.152 ]' 00:16:40.152 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.152 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.152 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.152 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:40.152 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.413 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.413 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.413 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:40.413 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE5MTI2YjI5OGFmNzIwMTczZTU5M2IzOTgwOWQ4YzA3ODZlNGZhYTc0MTgxNDYzYjhlMDg3Mjk1NmMxZGFlYYq7ZUs=: 00:16:40.983 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.983 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:40.983 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.983 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.983 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.983 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.983 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.983 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:40.983 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:41.243 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:16:41.243 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.243 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:41.243 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:41.243 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:41.243 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.243 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.243 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.243 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.243 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.243 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.243 13:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.503 00:16:41.503 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.503 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.503 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.763 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.763 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.763 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.763 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.763 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.763 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.763 { 00:16:41.763 "cntlid": 65, 00:16:41.763 "qid": 0, 00:16:41.763 "state": "enabled", 00:16:41.763 "thread": "nvmf_tgt_poll_group_000", 00:16:41.763 "listen_address": { 00:16:41.763 "trtype": "TCP", 00:16:41.763 "adrfam": "IPv4", 00:16:41.763 "traddr": "10.0.0.2", 00:16:41.763 "trsvcid": "4420" 00:16:41.763 }, 00:16:41.763 "peer_address": { 00:16:41.763 "trtype": "TCP", 00:16:41.763 "adrfam": "IPv4", 00:16:41.763 "traddr": "10.0.0.1", 00:16:41.763 "trsvcid": "56936" 00:16:41.763 }, 00:16:41.763 "auth": { 00:16:41.763 "state": "completed", 00:16:41.763 "digest": "sha384", 00:16:41.763 "dhgroup": "ffdhe3072" 00:16:41.763 } 00:16:41.763 } 00:16:41.763 ]' 00:16:41.763 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.763 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:41.763 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.763 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:41.763 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.763 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.763 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.763 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.023 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTBkYzU4NzJiNjBjY2U1NTc1NTI5M2U4OGU1YmQ4ZmU3N2E5MDc2YjhmODUzMjg5JDjO3w==: --dhchap-ctrl-secret DHHC-1:03:ZDMxYmZlMTc1YTJmMTZkYjQ0ZjhkYTFjMDJjYTJjY2Q3MmI3ZTM0ZjM0YjExYjAzMTQ2MTQ4MGMwNmU2MTg5MU7s0UI=: 00:16:42.594 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.594 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:42.594 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.594 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.594 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.594 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.594 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:42.594 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:42.594 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:16:42.594 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.594 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:42.594 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:42.594 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:42.594 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.594 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.594 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.594 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.594 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.594 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.594 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.855 00:16:42.855 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.855 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.855 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.116 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.116 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.116 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.116 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.116 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.116 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.116 { 00:16:43.116 "cntlid": 67, 00:16:43.116 "qid": 0, 00:16:43.116 "state": "enabled", 00:16:43.116 "thread": "nvmf_tgt_poll_group_000", 00:16:43.116 "listen_address": { 00:16:43.116 "trtype": "TCP", 00:16:43.116 "adrfam": "IPv4", 00:16:43.116 "traddr": "10.0.0.2", 00:16:43.116 "trsvcid": "4420" 00:16:43.116 }, 00:16:43.116 "peer_address": { 00:16:43.116 "trtype": "TCP", 00:16:43.116 "adrfam": "IPv4", 00:16:43.116 "traddr": "10.0.0.1", 00:16:43.116 "trsvcid": "56954" 00:16:43.116 }, 00:16:43.116 "auth": { 00:16:43.116 "state": "completed", 00:16:43.116 "digest": "sha384", 00:16:43.116 "dhgroup": "ffdhe3072" 00:16:43.116 } 00:16:43.116 } 00:16:43.116 ]' 00:16:43.116 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.116 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.116 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.116 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:43.116 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.376 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.376 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.376 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.376 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjE3OWFjNmJkM2JhOTA3OWUxZTY4NjJjMTZmYjM3MWIjdNi0: --dhchap-ctrl-secret DHHC-1:02:OTIxMWNjZWIzNmNjN2E0MWVhNDYzNDgxM2FlNjJlMjkyMmQyNGQ1ZGE0NzdkNzdhhnc8Aw==: 00:16:43.947 13:58:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.947 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.947 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.947 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.947 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.947 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.947 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:43.947 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:44.207 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:16:44.207 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.207 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:44.207 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:44.207 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:44.207 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.207 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.208 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.208 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.208 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.208 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.208 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.468 00:16:44.468 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.468 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.468 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.729 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.729 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.729 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.729 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.729 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.729 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.729 { 00:16:44.729 "cntlid": 69, 00:16:44.729 "qid": 0, 00:16:44.729 "state": "enabled", 00:16:44.729 "thread": "nvmf_tgt_poll_group_000", 00:16:44.729 "listen_address": { 00:16:44.729 "trtype": "TCP", 00:16:44.729 "adrfam": "IPv4", 00:16:44.729 "traddr": "10.0.0.2", 00:16:44.729 "trsvcid": "4420" 00:16:44.729 }, 00:16:44.729 "peer_address": { 00:16:44.729 "trtype": "TCP", 00:16:44.729 "adrfam": "IPv4", 00:16:44.729 "traddr": "10.0.0.1", 00:16:44.729 "trsvcid": "56974" 00:16:44.729 }, 00:16:44.729 "auth": { 00:16:44.729 "state": "completed", 00:16:44.729 "digest": "sha384", 00:16:44.729 "dhgroup": "ffdhe3072" 00:16:44.729 } 00:16:44.729 } 00:16:44.729 ]' 00:16:44.729 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.729 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.729 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.729 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.729 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.729 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.729 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.729 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.989 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTRiYTUwZmRjYWM0ZmNhNzI3MjRlMzY2NTVjMjMxMzFlZTYxNDc1YmNmOTk2ZDM1jujlQA==: --dhchap-ctrl-secret DHHC-1:01:OTJkODg5NjEwNjU2ODY3YWExYjk2YzJiN2UwZTEyMTSGEAw5: 00:16:45.568 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.568 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.568 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.568 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.569 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.569 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.569 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:45.569 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:45.569 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:16:45.569 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.569 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:45.569 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:45.569 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:45.569 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.569 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:45.569 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.569 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.569 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.569 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:45.569 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:45.911 00:16:45.911 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.911 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.911 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.181 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.181 13:58:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.181 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.181 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.181 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.181 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.181 { 00:16:46.181 "cntlid": 71, 00:16:46.181 "qid": 0, 00:16:46.181 "state": "enabled", 00:16:46.181 "thread": "nvmf_tgt_poll_group_000", 00:16:46.181 "listen_address": { 00:16:46.181 "trtype": "TCP", 00:16:46.181 "adrfam": "IPv4", 00:16:46.181 "traddr": "10.0.0.2", 00:16:46.181 "trsvcid": "4420" 00:16:46.181 }, 00:16:46.181 "peer_address": { 00:16:46.181 "trtype": "TCP", 00:16:46.181 "adrfam": "IPv4", 00:16:46.181 "traddr": "10.0.0.1", 00:16:46.181 "trsvcid": "56998" 00:16:46.181 }, 00:16:46.181 "auth": { 00:16:46.181 "state": "completed", 00:16:46.181 "digest": "sha384", 00:16:46.181 "dhgroup": "ffdhe3072" 00:16:46.181 } 00:16:46.181 } 00:16:46.181 ]' 00:16:46.181 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.181 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.181 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.181 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:46.181 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.181 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.181 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.181 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.442 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE5MTI2YjI5OGFmNzIwMTczZTU5M2IzOTgwOWQ4YzA3ODZlNGZhYTc0MTgxNDYzYjhlMDg3Mjk1NmMxZGFlYYq7ZUs=: 00:16:47.011 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.011 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.011 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.011 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.011 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.011 13:58:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.011 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.011 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:47.011 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:47.011 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:16:47.011 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.011 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:47.011 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:47.011 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:47.011 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.011 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.011 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.011 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.011 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.011 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.011 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.271 00:16:47.271 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.271 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.271 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.531 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.531 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.531 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.531 13:58:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.531 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.531 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.531 { 00:16:47.531 "cntlid": 73, 00:16:47.531 "qid": 0, 00:16:47.531 "state": "enabled", 00:16:47.531 "thread": "nvmf_tgt_poll_group_000", 00:16:47.531 "listen_address": { 00:16:47.531 "trtype": "TCP", 00:16:47.531 "adrfam": "IPv4", 00:16:47.531 "traddr": "10.0.0.2", 00:16:47.531 "trsvcid": "4420" 00:16:47.531 }, 00:16:47.531 "peer_address": { 00:16:47.531 "trtype": "TCP", 00:16:47.531 "adrfam": "IPv4", 00:16:47.531 "traddr": "10.0.0.1", 00:16:47.531 "trsvcid": "57032" 00:16:47.531 }, 00:16:47.531 "auth": { 00:16:47.531 "state": "completed", 00:16:47.531 "digest": "sha384", 00:16:47.531 "dhgroup": "ffdhe4096" 00:16:47.531 } 00:16:47.531 } 00:16:47.531 ]' 00:16:47.531 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.531 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.531 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.531 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:47.531 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.790 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.790 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.790 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.790 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTBkYzU4NzJiNjBjY2U1NTc1NTI5M2U4OGU1YmQ4ZmU3N2E5MDc2YjhmODUzMjg5JDjO3w==: --dhchap-ctrl-secret DHHC-1:03:ZDMxYmZlMTc1YTJmMTZkYjQ0ZjhkYTFjMDJjYTJjY2Q3MmI3ZTM0ZjM0YjExYjAzMTQ2MTQ4MGMwNmU2MTg5MU7s0UI=: 00:16:48.359 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.359 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.359 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.359 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.359 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.359 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.359 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:48.359 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:48.618 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:16:48.618 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.618 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:48.618 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:48.618 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:48.618 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.618 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.618 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.618 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.618 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.618 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.618 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.877 00:16:48.877 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:48.877 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.877 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.877 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.877 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.877 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.877 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.137 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.137 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:16:49.137 { 00:16:49.137 "cntlid": 75, 00:16:49.137 "qid": 0, 00:16:49.137 "state": "enabled", 00:16:49.137 "thread": "nvmf_tgt_poll_group_000", 00:16:49.137 "listen_address": { 00:16:49.137 "trtype": "TCP", 00:16:49.137 "adrfam": "IPv4", 00:16:49.137 "traddr": "10.0.0.2", 00:16:49.137 "trsvcid": "4420" 00:16:49.137 }, 00:16:49.137 "peer_address": { 00:16:49.137 "trtype": "TCP", 00:16:49.137 "adrfam": "IPv4", 00:16:49.137 "traddr": "10.0.0.1", 00:16:49.137 "trsvcid": "57064" 00:16:49.137 }, 00:16:49.137 "auth": { 00:16:49.137 "state": "completed", 00:16:49.137 "digest": "sha384", 00:16:49.137 "dhgroup": "ffdhe4096" 00:16:49.137 } 00:16:49.137 } 00:16:49.137 ]' 00:16:49.137 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.137 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.137 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.137 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.137 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.137 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.137 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.137 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.397 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjE3OWFjNmJkM2JhOTA3OWUxZTY4NjJjMTZmYjM3MWIjdNi0: --dhchap-ctrl-secret DHHC-1:02:OTIxMWNjZWIzNmNjN2E0MWVhNDYzNDgxM2FlNjJlMjkyMmQyNGQ1ZGE0NzdkNzdhhnc8Aw==: 00:16:49.967 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.967 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.967 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.967 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.967 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.967 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.967 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:49.967 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:49.967 
13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:16:49.967 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.967 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:49.967 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:49.967 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:49.967 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.967 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.967 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.967 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.967 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.967 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.967 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.228 00:16:50.228 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.228 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.228 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.488 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.488 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.488 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.488 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.488 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.488 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.488 { 00:16:50.488 "cntlid": 77, 00:16:50.488 "qid": 0, 00:16:50.488 "state": "enabled", 00:16:50.488 "thread": "nvmf_tgt_poll_group_000", 00:16:50.488 "listen_address": { 00:16:50.488 "trtype": "TCP", 00:16:50.488 "adrfam": "IPv4", 00:16:50.488 "traddr": "10.0.0.2", 00:16:50.488 "trsvcid": "4420" 00:16:50.488 }, 00:16:50.488 "peer_address": { 
00:16:50.488 "trtype": "TCP", 00:16:50.488 "adrfam": "IPv4", 00:16:50.488 "traddr": "10.0.0.1", 00:16:50.488 "trsvcid": "36952" 00:16:50.488 }, 00:16:50.488 "auth": { 00:16:50.488 "state": "completed", 00:16:50.488 "digest": "sha384", 00:16:50.488 "dhgroup": "ffdhe4096" 00:16:50.488 } 00:16:50.488 } 00:16:50.488 ]' 00:16:50.488 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.488 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.488 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.488 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:50.488 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.748 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.748 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.748 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.748 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTRiYTUwZmRjYWM0ZmNhNzI3MjRlMzY2NTVjMjMxMzFlZTYxNDc1YmNmOTk2ZDM1jujlQA==: --dhchap-ctrl-secret DHHC-1:01:OTJkODg5NjEwNjU2ODY3YWExYjk2YzJiN2UwZTEyMTSGEAw5: 00:16:51.318 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.318 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.318 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.318 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.318 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.318 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.318 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:51.318 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:51.578 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:16:51.578 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.578 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:16:51.578 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:51.578 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:51.578 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.578 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:51.578 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.578 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.578 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.578 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:51.578 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:51.837 00:16:51.837 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.837 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.837 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.096 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.096 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.096 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.096 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.096 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.096 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.096 { 00:16:52.096 "cntlid": 79, 00:16:52.096 "qid": 0, 00:16:52.096 "state": "enabled", 00:16:52.096 "thread": "nvmf_tgt_poll_group_000", 00:16:52.096 "listen_address": { 00:16:52.096 "trtype": "TCP", 00:16:52.096 "adrfam": "IPv4", 00:16:52.096 "traddr": "10.0.0.2", 00:16:52.096 "trsvcid": "4420" 00:16:52.096 }, 00:16:52.096 "peer_address": { 00:16:52.096 "trtype": "TCP", 00:16:52.096 "adrfam": "IPv4", 00:16:52.096 "traddr": "10.0.0.1", 00:16:52.096 "trsvcid": "36978" 00:16:52.096 }, 00:16:52.096 "auth": { 00:16:52.096 "state": "completed", 00:16:52.096 "digest": "sha384", 00:16:52.096 "dhgroup": "ffdhe4096" 00:16:52.096 } 00:16:52.096 } 00:16:52.096 ]' 00:16:52.096 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:16:52.096 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.097 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.097 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:52.097 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.097 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.097 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.097 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.356 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE5MTI2YjI5OGFmNzIwMTczZTU5M2IzOTgwOWQ4YzA3ODZlNGZhYTc0MTgxNDYzYjhlMDg3Mjk1NmMxZGFlYYq7ZUs=: 00:16:52.927 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.927 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.927 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.927 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.927 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.927 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.927 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.927 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:52.927 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:52.927 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:16:52.927 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.927 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:52.927 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:52.927 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:52.927 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
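The iterations traced above all exercise the same DH-HMAC-CHAP check. As a hedged recap (a condensed sketch, not the literal auth.sh source), the command sequence visible in this stretch of the log can be summarized as follows; the rpc.py path, host socket, addresses, NQNs, ports and key names are copied from the trace, while the loop variables, the target-side rpc.py call without -s, and the elided DHHC-1 secrets are illustrative assumptions.

#!/usr/bin/env bash
# Sketch of the per-iteration flow seen in the trace above (assumptions noted inline).
set -e

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

digest=sha384
dhgroup=ffdhe4096
key=key0     # key0/ckey0 ... key3 were registered earlier in the test (not shown in this stretch of the log)
ckey=ckey0

# 1. Restrict the SPDK host (bdev_nvme) to a single digest/DH-group combination.
"$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Allow the host on the target subsystem with the DH-HMAC-CHAP key pair.
#    ("$rpc" without -s stands in for the target-side rpc_cmd wrapper; its socket is not shown here.)
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"

# 3. Attach a controller from the SPDK host and confirm it shows up as nvme0.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"
[[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# 4. On the target, the qpair must report the negotiated auth parameters as completed.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# 5. Detach, repeat the connection with the kernel initiator (nvme-cli) using the literal
#    DHHC-1 secrets from the trace (elided here), then remove the host again.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

In the log this pattern simply repeats for each --dhchap-dhgroups value (ffdhe2048 through ffdhe6144 in this stretch) and for each key index 0-3, which is why the same RPC sequence recurs with only the digest, dhgroup and key arguments changing.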
00:16:52.927 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.927 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.927 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.927 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.927 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.927 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.498 00:16:53.498 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.498 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.498 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.498 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.498 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.498 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.498 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.498 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.498 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.498 { 00:16:53.498 "cntlid": 81, 00:16:53.498 "qid": 0, 00:16:53.498 "state": "enabled", 00:16:53.498 "thread": "nvmf_tgt_poll_group_000", 00:16:53.498 "listen_address": { 00:16:53.498 "trtype": "TCP", 00:16:53.498 "adrfam": "IPv4", 00:16:53.498 "traddr": "10.0.0.2", 00:16:53.498 "trsvcid": "4420" 00:16:53.498 }, 00:16:53.498 "peer_address": { 00:16:53.498 "trtype": "TCP", 00:16:53.498 "adrfam": "IPv4", 00:16:53.498 "traddr": "10.0.0.1", 00:16:53.498 "trsvcid": "37008" 00:16:53.498 }, 00:16:53.498 "auth": { 00:16:53.498 "state": "completed", 00:16:53.498 "digest": "sha384", 00:16:53.498 "dhgroup": "ffdhe6144" 00:16:53.498 } 00:16:53.498 } 00:16:53.498 ]' 00:16:53.498 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.498 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.498 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.498 13:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:53.498 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.758 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.758 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.758 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.758 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTBkYzU4NzJiNjBjY2U1NTc1NTI5M2U4OGU1YmQ4ZmU3N2E5MDc2YjhmODUzMjg5JDjO3w==: --dhchap-ctrl-secret DHHC-1:03:ZDMxYmZlMTc1YTJmMTZkYjQ0ZjhkYTFjMDJjYTJjY2Q3MmI3ZTM0ZjM0YjExYjAzMTQ2MTQ4MGMwNmU2MTg5MU7s0UI=: 00:16:54.329 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.329 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.329 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.329 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.329 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.329 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.329 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:54.329 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:54.589 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:16:54.589 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.589 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:54.589 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:54.589 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:54.589 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.589 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.589 13:58:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.589 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.589 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.589 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.589 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.849 00:16:54.849 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.849 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.849 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.108 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.108 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.108 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.108 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.108 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.108 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.108 { 00:16:55.109 "cntlid": 83, 00:16:55.109 "qid": 0, 00:16:55.109 "state": "enabled", 00:16:55.109 "thread": "nvmf_tgt_poll_group_000", 00:16:55.109 "listen_address": { 00:16:55.109 "trtype": "TCP", 00:16:55.109 "adrfam": "IPv4", 00:16:55.109 "traddr": "10.0.0.2", 00:16:55.109 "trsvcid": "4420" 00:16:55.109 }, 00:16:55.109 "peer_address": { 00:16:55.109 "trtype": "TCP", 00:16:55.109 "adrfam": "IPv4", 00:16:55.109 "traddr": "10.0.0.1", 00:16:55.109 "trsvcid": "37034" 00:16:55.109 }, 00:16:55.109 "auth": { 00:16:55.109 "state": "completed", 00:16:55.109 "digest": "sha384", 00:16:55.109 "dhgroup": "ffdhe6144" 00:16:55.109 } 00:16:55.109 } 00:16:55.109 ]' 00:16:55.109 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.109 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.109 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.109 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:55.109 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.109 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.109 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.109 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.367 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjE3OWFjNmJkM2JhOTA3OWUxZTY4NjJjMTZmYjM3MWIjdNi0: --dhchap-ctrl-secret DHHC-1:02:OTIxMWNjZWIzNmNjN2E0MWVhNDYzNDgxM2FlNjJlMjkyMmQyNGQ1ZGE0NzdkNzdhhnc8Aw==: 00:16:55.937 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.937 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.937 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.937 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.937 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.937 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.937 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.937 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:56.197 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:16:56.197 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.197 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:56.197 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:56.197 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:56.197 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.197 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.197 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.197 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.197 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.197 13:58:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.197 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.457 00:16:56.457 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.457 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.457 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.717 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.717 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.717 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.717 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.717 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.717 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.717 { 00:16:56.717 "cntlid": 85, 00:16:56.717 "qid": 0, 00:16:56.717 "state": "enabled", 00:16:56.717 "thread": "nvmf_tgt_poll_group_000", 00:16:56.717 "listen_address": { 00:16:56.717 "trtype": "TCP", 00:16:56.717 "adrfam": "IPv4", 00:16:56.717 "traddr": "10.0.0.2", 00:16:56.717 "trsvcid": "4420" 00:16:56.717 }, 00:16:56.717 "peer_address": { 00:16:56.717 "trtype": "TCP", 00:16:56.717 "adrfam": "IPv4", 00:16:56.717 "traddr": "10.0.0.1", 00:16:56.717 "trsvcid": "37058" 00:16:56.717 }, 00:16:56.717 "auth": { 00:16:56.717 "state": "completed", 00:16:56.717 "digest": "sha384", 00:16:56.717 "dhgroup": "ffdhe6144" 00:16:56.717 } 00:16:56.717 } 00:16:56.717 ]' 00:16:56.717 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.717 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.717 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.717 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:56.717 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.717 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.717 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.717 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.975 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTRiYTUwZmRjYWM0ZmNhNzI3MjRlMzY2NTVjMjMxMzFlZTYxNDc1YmNmOTk2ZDM1jujlQA==: --dhchap-ctrl-secret DHHC-1:01:OTJkODg5NjEwNjU2ODY3YWExYjk2YzJiN2UwZTEyMTSGEAw5: 00:16:57.541 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.541 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.541 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.541 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.541 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.541 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.541 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:57.541 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:57.541 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:16:57.541 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.541 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:57.541 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:57.541 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:57.541 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.541 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:57.541 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.541 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.541 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.541 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.541 13:58:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.108 00:16:58.108 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.108 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.109 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.109 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.109 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.109 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.109 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.109 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.109 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.109 { 00:16:58.109 "cntlid": 87, 00:16:58.109 "qid": 0, 00:16:58.109 "state": "enabled", 00:16:58.109 "thread": "nvmf_tgt_poll_group_000", 00:16:58.109 "listen_address": { 00:16:58.109 "trtype": "TCP", 00:16:58.109 "adrfam": "IPv4", 00:16:58.109 "traddr": "10.0.0.2", 00:16:58.109 "trsvcid": "4420" 00:16:58.109 }, 00:16:58.109 "peer_address": { 00:16:58.109 "trtype": "TCP", 00:16:58.109 "adrfam": "IPv4", 00:16:58.109 "traddr": "10.0.0.1", 00:16:58.109 "trsvcid": "37076" 00:16:58.109 }, 00:16:58.109 "auth": { 00:16:58.109 "state": "completed", 00:16:58.109 "digest": "sha384", 00:16:58.109 "dhgroup": "ffdhe6144" 00:16:58.109 } 00:16:58.109 } 00:16:58.109 ]' 00:16:58.109 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.109 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.109 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.369 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:58.369 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.369 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.369 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.369 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.369 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-secret DHHC-1:03:OGE5MTI2YjI5OGFmNzIwMTczZTU5M2IzOTgwOWQ4YzA3ODZlNGZhYTc0MTgxNDYzYjhlMDg3Mjk1NmMxZGFlYYq7ZUs=: 00:16:58.937 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.937 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.937 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.937 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.937 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.937 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.937 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.937 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:58.937 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:59.197 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:16:59.197 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.197 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:59.197 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:59.197 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:59.197 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.197 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.197 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.197 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.197 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.197 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.198 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.768 00:16:59.768 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.768 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.768 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.768 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.768 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.768 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.768 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.768 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.768 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.768 { 00:16:59.768 "cntlid": 89, 00:16:59.768 "qid": 0, 00:16:59.768 "state": "enabled", 00:16:59.768 "thread": "nvmf_tgt_poll_group_000", 00:16:59.768 "listen_address": { 00:16:59.768 "trtype": "TCP", 00:16:59.768 "adrfam": "IPv4", 00:16:59.768 "traddr": "10.0.0.2", 00:16:59.768 "trsvcid": "4420" 00:16:59.768 }, 00:16:59.768 "peer_address": { 00:16:59.768 "trtype": "TCP", 00:16:59.768 "adrfam": "IPv4", 00:16:59.768 "traddr": "10.0.0.1", 00:16:59.768 "trsvcid": "56904" 00:16:59.768 }, 00:16:59.768 "auth": { 00:16:59.768 "state": "completed", 00:16:59.768 "digest": "sha384", 00:16:59.768 "dhgroup": "ffdhe8192" 00:16:59.768 } 00:16:59.768 } 00:16:59.768 ]' 00:16:59.768 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.069 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.069 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.069 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.069 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.069 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.069 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.069 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.347 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTBkYzU4NzJiNjBjY2U1NTc1NTI5M2U4OGU1YmQ4ZmU3N2E5MDc2YjhmODUzMjg5JDjO3w==: --dhchap-ctrl-secret DHHC-1:03:ZDMxYmZlMTc1YTJmMTZkYjQ0ZjhkYTFjMDJjYTJjY2Q3MmI3ZTM0ZjM0YjExYjAzMTQ2MTQ4MGMwNmU2MTg5MU7s0UI=: 00:17:00.607 13:58:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.607 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.607 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.607 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.607 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.607 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.607 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.607 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.867 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:00.867 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.867 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:00.867 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:00.867 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:00.867 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.867 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.867 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.867 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.867 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.867 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.867 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.230 00:17:01.230 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.230 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.230 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.502 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.502 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.502 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.502 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.502 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.502 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.502 { 00:17:01.502 "cntlid": 91, 00:17:01.502 "qid": 0, 00:17:01.502 "state": "enabled", 00:17:01.502 "thread": "nvmf_tgt_poll_group_000", 00:17:01.502 "listen_address": { 00:17:01.502 "trtype": "TCP", 00:17:01.502 "adrfam": "IPv4", 00:17:01.502 "traddr": "10.0.0.2", 00:17:01.502 "trsvcid": "4420" 00:17:01.502 }, 00:17:01.502 "peer_address": { 00:17:01.502 "trtype": "TCP", 00:17:01.502 "adrfam": "IPv4", 00:17:01.502 "traddr": "10.0.0.1", 00:17:01.502 "trsvcid": "56932" 00:17:01.502 }, 00:17:01.502 "auth": { 00:17:01.502 "state": "completed", 00:17:01.502 "digest": "sha384", 00:17:01.502 "dhgroup": "ffdhe8192" 00:17:01.502 } 00:17:01.502 } 00:17:01.502 ]' 00:17:01.502 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.502 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.502 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.502 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.502 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.762 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.762 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.762 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.762 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjE3OWFjNmJkM2JhOTA3OWUxZTY4NjJjMTZmYjM3MWIjdNi0: --dhchap-ctrl-secret DHHC-1:02:OTIxMWNjZWIzNmNjN2E0MWVhNDYzNDgxM2FlNjJlMjkyMmQyNGQ1ZGE0NzdkNzdhhnc8Aw==: 00:17:02.331 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.331 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.331 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.331 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.331 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.331 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.331 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:02.331 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:02.590 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:02.590 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.590 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:02.590 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:02.590 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:02.590 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.590 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.590 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.590 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.590 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.590 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.590 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.161 00:17:03.161 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.161 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:03.161 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.162 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:17:03.162 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.162 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.162 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.162 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.162 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.162 { 00:17:03.162 "cntlid": 93, 00:17:03.162 "qid": 0, 00:17:03.162 "state": "enabled", 00:17:03.162 "thread": "nvmf_tgt_poll_group_000", 00:17:03.162 "listen_address": { 00:17:03.162 "trtype": "TCP", 00:17:03.162 "adrfam": "IPv4", 00:17:03.162 "traddr": "10.0.0.2", 00:17:03.162 "trsvcid": "4420" 00:17:03.162 }, 00:17:03.162 "peer_address": { 00:17:03.162 "trtype": "TCP", 00:17:03.162 "adrfam": "IPv4", 00:17:03.162 "traddr": "10.0.0.1", 00:17:03.162 "trsvcid": "56952" 00:17:03.162 }, 00:17:03.162 "auth": { 00:17:03.162 "state": "completed", 00:17:03.162 "digest": "sha384", 00:17:03.162 "dhgroup": "ffdhe8192" 00:17:03.162 } 00:17:03.162 } 00:17:03.162 ]' 00:17:03.162 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.162 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.162 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.162 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:03.162 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.422 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.422 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.422 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.422 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTRiYTUwZmRjYWM0ZmNhNzI3MjRlMzY2NTVjMjMxMzFlZTYxNDc1YmNmOTk2ZDM1jujlQA==: --dhchap-ctrl-secret DHHC-1:01:OTJkODg5NjEwNjU2ODY3YWExYjk2YzJiN2UwZTEyMTSGEAw5: 00:17:03.992 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.992 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.992 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.992 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.992 13:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.992 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.992 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:03.992 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:04.251 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:04.251 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.251 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:04.251 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:04.251 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:04.251 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.251 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:04.251 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.251 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.251 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.251 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.252 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.821 00:17:04.821 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.821 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.821 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.821 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.821 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.821 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.821 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:17:04.821 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.821 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.821 { 00:17:04.821 "cntlid": 95, 00:17:04.821 "qid": 0, 00:17:04.821 "state": "enabled", 00:17:04.821 "thread": "nvmf_tgt_poll_group_000", 00:17:04.821 "listen_address": { 00:17:04.821 "trtype": "TCP", 00:17:04.821 "adrfam": "IPv4", 00:17:04.821 "traddr": "10.0.0.2", 00:17:04.821 "trsvcid": "4420" 00:17:04.821 }, 00:17:04.821 "peer_address": { 00:17:04.821 "trtype": "TCP", 00:17:04.821 "adrfam": "IPv4", 00:17:04.821 "traddr": "10.0.0.1", 00:17:04.821 "trsvcid": "56992" 00:17:04.821 }, 00:17:04.821 "auth": { 00:17:04.821 "state": "completed", 00:17:04.821 "digest": "sha384", 00:17:04.821 "dhgroup": "ffdhe8192" 00:17:04.821 } 00:17:04.821 } 00:17:04.821 ]' 00:17:04.821 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.821 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.821 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.081 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:05.081 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.081 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.081 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.081 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.081 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE5MTI2YjI5OGFmNzIwMTczZTU5M2IzOTgwOWQ4YzA3ODZlNGZhYTc0MTgxNDYzYjhlMDg3Mjk1NmMxZGFlYYq7ZUs=: 00:17:05.651 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.651 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.651 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.651 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.651 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.651 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:05.651 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.651 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.651 13:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:05.651 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:05.911 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:05.911 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.911 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:05.911 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:05.911 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:05.911 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.911 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.911 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.911 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.911 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.911 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.911 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.171 00:17:06.171 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.171 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.171 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.171 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.171 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.171 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.171 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.171 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.171 13:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.171 { 00:17:06.171 "cntlid": 97, 00:17:06.171 "qid": 0, 00:17:06.171 "state": "enabled", 00:17:06.171 "thread": "nvmf_tgt_poll_group_000", 00:17:06.171 "listen_address": { 00:17:06.171 "trtype": "TCP", 00:17:06.171 "adrfam": "IPv4", 00:17:06.171 "traddr": "10.0.0.2", 00:17:06.171 "trsvcid": "4420" 00:17:06.171 }, 00:17:06.171 "peer_address": { 00:17:06.171 "trtype": "TCP", 00:17:06.171 "adrfam": "IPv4", 00:17:06.171 "traddr": "10.0.0.1", 00:17:06.171 "trsvcid": "57016" 00:17:06.171 }, 00:17:06.171 "auth": { 00:17:06.171 "state": "completed", 00:17:06.171 "digest": "sha512", 00:17:06.171 "dhgroup": "null" 00:17:06.171 } 00:17:06.171 } 00:17:06.171 ]' 00:17:06.171 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.431 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.431 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.431 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:06.431 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.431 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.431 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.431 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.691 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTBkYzU4NzJiNjBjY2U1NTc1NTI5M2U4OGU1YmQ4ZmU3N2E5MDc2YjhmODUzMjg5JDjO3w==: --dhchap-ctrl-secret DHHC-1:03:ZDMxYmZlMTc1YTJmMTZkYjQ0ZjhkYTFjMDJjYTJjY2Q3MmI3ZTM0ZjM0YjExYjAzMTQ2MTQ4MGMwNmU2MTg5MU7s0UI=: 00:17:07.261 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.261 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:07.261 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.261 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.261 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.261 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.261 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:07.261 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:07.261 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:07.261 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.261 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:07.261 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:07.261 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:07.261 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.261 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.261 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.261 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.261 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.261 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.261 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.521 00:17:07.521 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.521 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.521 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.781 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.781 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.781 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.781 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.781 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.781 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.781 { 00:17:07.781 "cntlid": 99, 00:17:07.781 "qid": 0, 00:17:07.781 "state": "enabled", 00:17:07.781 "thread": "nvmf_tgt_poll_group_000", 00:17:07.781 "listen_address": { 00:17:07.781 "trtype": "TCP", 00:17:07.781 "adrfam": "IPv4", 00:17:07.782 
"traddr": "10.0.0.2", 00:17:07.782 "trsvcid": "4420" 00:17:07.782 }, 00:17:07.782 "peer_address": { 00:17:07.782 "trtype": "TCP", 00:17:07.782 "adrfam": "IPv4", 00:17:07.782 "traddr": "10.0.0.1", 00:17:07.782 "trsvcid": "57050" 00:17:07.782 }, 00:17:07.782 "auth": { 00:17:07.782 "state": "completed", 00:17:07.782 "digest": "sha512", 00:17:07.782 "dhgroup": "null" 00:17:07.782 } 00:17:07.782 } 00:17:07.782 ]' 00:17:07.782 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.782 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.782 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.782 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:07.782 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.782 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.782 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.782 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.042 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjE3OWFjNmJkM2JhOTA3OWUxZTY4NjJjMTZmYjM3MWIjdNi0: --dhchap-ctrl-secret DHHC-1:02:OTIxMWNjZWIzNmNjN2E0MWVhNDYzNDgxM2FlNjJlMjkyMmQyNGQ1ZGE0NzdkNzdhhnc8Aw==: 00:17:08.614 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.614 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.614 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.614 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.614 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.614 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.614 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:08.614 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:08.614 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:08.614 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.614 13:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:08.614 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:08.614 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:08.614 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.614 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.614 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.614 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.614 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.614 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.614 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.874 00:17:08.874 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.874 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.874 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.134 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.134 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.134 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.134 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.134 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.134 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.134 { 00:17:09.134 "cntlid": 101, 00:17:09.134 "qid": 0, 00:17:09.134 "state": "enabled", 00:17:09.134 "thread": "nvmf_tgt_poll_group_000", 00:17:09.134 "listen_address": { 00:17:09.134 "trtype": "TCP", 00:17:09.134 "adrfam": "IPv4", 00:17:09.134 "traddr": "10.0.0.2", 00:17:09.134 "trsvcid": "4420" 00:17:09.134 }, 00:17:09.134 "peer_address": { 00:17:09.134 "trtype": "TCP", 00:17:09.134 "adrfam": "IPv4", 00:17:09.134 "traddr": "10.0.0.1", 00:17:09.134 "trsvcid": "57078" 00:17:09.134 }, 00:17:09.134 "auth": { 00:17:09.134 "state": "completed", 00:17:09.134 "digest": "sha512", 00:17:09.134 "dhgroup": "null" 
00:17:09.134 } 00:17:09.134 } 00:17:09.134 ]' 00:17:09.135 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.135 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.135 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.135 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:09.135 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.394 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.394 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.394 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.394 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTRiYTUwZmRjYWM0ZmNhNzI3MjRlMzY2NTVjMjMxMzFlZTYxNDc1YmNmOTk2ZDM1jujlQA==: --dhchap-ctrl-secret DHHC-1:01:OTJkODg5NjEwNjU2ODY3YWExYjk2YzJiN2UwZTEyMTSGEAw5: 00:17:09.963 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.963 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.963 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.963 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.963 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.963 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.963 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:09.963 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:10.223 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:10.223 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.223 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:10.223 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:10.223 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:10.223 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.223 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:10.223 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.224 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.224 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.224 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.224 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.484 00:17:10.484 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.484 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.484 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.744 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.744 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.744 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.744 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.744 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.744 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.744 { 00:17:10.744 "cntlid": 103, 00:17:10.744 "qid": 0, 00:17:10.744 "state": "enabled", 00:17:10.744 "thread": "nvmf_tgt_poll_group_000", 00:17:10.744 "listen_address": { 00:17:10.744 "trtype": "TCP", 00:17:10.744 "adrfam": "IPv4", 00:17:10.744 "traddr": "10.0.0.2", 00:17:10.744 "trsvcid": "4420" 00:17:10.744 }, 00:17:10.744 "peer_address": { 00:17:10.744 "trtype": "TCP", 00:17:10.744 "adrfam": "IPv4", 00:17:10.744 "traddr": "10.0.0.1", 00:17:10.744 "trsvcid": "49288" 00:17:10.744 }, 00:17:10.744 "auth": { 00:17:10.744 "state": "completed", 00:17:10.744 "digest": "sha512", 00:17:10.744 "dhgroup": "null" 00:17:10.744 } 00:17:10.744 } 00:17:10.744 ]' 00:17:10.744 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.744 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.744 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.744 13:58:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:10.744 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.744 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.744 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.744 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.004 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE5MTI2YjI5OGFmNzIwMTczZTU5M2IzOTgwOWQ4YzA3ODZlNGZhYTc0MTgxNDYzYjhlMDg3Mjk1NmMxZGFlYYq7ZUs=: 00:17:11.575 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.575 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.575 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.575 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.575 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.575 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.575 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.575 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:11.575 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:11.575 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:11.575 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.575 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:11.575 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:11.575 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:11.575 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.575 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.575 13:58:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.575 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.575 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.575 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.575 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.835 00:17:11.835 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.835 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.835 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.096 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.096 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.096 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.096 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.096 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.096 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.096 { 00:17:12.096 "cntlid": 105, 00:17:12.096 "qid": 0, 00:17:12.096 "state": "enabled", 00:17:12.096 "thread": "nvmf_tgt_poll_group_000", 00:17:12.096 "listen_address": { 00:17:12.096 "trtype": "TCP", 00:17:12.096 "adrfam": "IPv4", 00:17:12.096 "traddr": "10.0.0.2", 00:17:12.096 "trsvcid": "4420" 00:17:12.096 }, 00:17:12.096 "peer_address": { 00:17:12.096 "trtype": "TCP", 00:17:12.096 "adrfam": "IPv4", 00:17:12.096 "traddr": "10.0.0.1", 00:17:12.096 "trsvcid": "49330" 00:17:12.096 }, 00:17:12.096 "auth": { 00:17:12.096 "state": "completed", 00:17:12.096 "digest": "sha512", 00:17:12.096 "dhgroup": "ffdhe2048" 00:17:12.096 } 00:17:12.096 } 00:17:12.096 ]' 00:17:12.096 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.096 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.096 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.096 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:12.096 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.096 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.096 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.096 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.356 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTBkYzU4NzJiNjBjY2U1NTc1NTI5M2U4OGU1YmQ4ZmU3N2E5MDc2YjhmODUzMjg5JDjO3w==: --dhchap-ctrl-secret DHHC-1:03:ZDMxYmZlMTc1YTJmMTZkYjQ0ZjhkYTFjMDJjYTJjY2Q3MmI3ZTM0ZjM0YjExYjAzMTQ2MTQ4MGMwNmU2MTg5MU7s0UI=: 00:17:12.927 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.927 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.927 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.927 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.927 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.927 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.927 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:12.927 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:13.187 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:13.187 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.187 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:13.187 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:13.187 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:13.187 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.187 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.187 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.187 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.187 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:17:13.187 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.187 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.187 00:17:13.448 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.448 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.448 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.448 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.448 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.448 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.448 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.448 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.448 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.448 { 00:17:13.448 "cntlid": 107, 00:17:13.448 "qid": 0, 00:17:13.448 "state": "enabled", 00:17:13.448 "thread": "nvmf_tgt_poll_group_000", 00:17:13.448 "listen_address": { 00:17:13.448 "trtype": "TCP", 00:17:13.448 "adrfam": "IPv4", 00:17:13.448 "traddr": "10.0.0.2", 00:17:13.448 "trsvcid": "4420" 00:17:13.448 }, 00:17:13.448 "peer_address": { 00:17:13.448 "trtype": "TCP", 00:17:13.448 "adrfam": "IPv4", 00:17:13.448 "traddr": "10.0.0.1", 00:17:13.448 "trsvcid": "49356" 00:17:13.448 }, 00:17:13.448 "auth": { 00:17:13.448 "state": "completed", 00:17:13.448 "digest": "sha512", 00:17:13.448 "dhgroup": "ffdhe2048" 00:17:13.448 } 00:17:13.448 } 00:17:13.448 ]' 00:17:13.448 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.448 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.448 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.708 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:13.708 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.708 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.708 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.708 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.709 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjE3OWFjNmJkM2JhOTA3OWUxZTY4NjJjMTZmYjM3MWIjdNi0: --dhchap-ctrl-secret DHHC-1:02:OTIxMWNjZWIzNmNjN2E0MWVhNDYzNDgxM2FlNjJlMjkyMmQyNGQ1ZGE0NzdkNzdhhnc8Aw==: 00:17:14.279 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.279 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.279 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.279 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.279 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.279 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.279 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:14.279 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:14.539 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:14.540 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.540 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:14.540 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:14.540 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:14.540 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.540 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.540 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.540 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.540 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.540 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:14.540 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.800 00:17:14.800 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.800 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.800 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.061 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.061 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.061 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.061 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.061 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.061 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.061 { 00:17:15.061 "cntlid": 109, 00:17:15.061 "qid": 0, 00:17:15.061 "state": "enabled", 00:17:15.061 "thread": "nvmf_tgt_poll_group_000", 00:17:15.061 "listen_address": { 00:17:15.061 "trtype": "TCP", 00:17:15.061 "adrfam": "IPv4", 00:17:15.061 "traddr": "10.0.0.2", 00:17:15.061 "trsvcid": "4420" 00:17:15.061 }, 00:17:15.061 "peer_address": { 00:17:15.061 "trtype": "TCP", 00:17:15.061 "adrfam": "IPv4", 00:17:15.061 "traddr": "10.0.0.1", 00:17:15.061 "trsvcid": "49384" 00:17:15.061 }, 00:17:15.061 "auth": { 00:17:15.061 "state": "completed", 00:17:15.061 "digest": "sha512", 00:17:15.061 "dhgroup": "ffdhe2048" 00:17:15.061 } 00:17:15.061 } 00:17:15.061 ]' 00:17:15.061 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.061 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.061 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.061 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:15.061 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.061 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.061 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.061 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.321 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTRiYTUwZmRjYWM0ZmNhNzI3MjRlMzY2NTVjMjMxMzFlZTYxNDc1YmNmOTk2ZDM1jujlQA==: --dhchap-ctrl-secret DHHC-1:01:OTJkODg5NjEwNjU2ODY3YWExYjk2YzJiN2UwZTEyMTSGEAw5: 00:17:15.893 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.893 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.893 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.893 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.893 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.893 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.893 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:15.893 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:16.154 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:16.154 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.154 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:16.154 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:16.154 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:16.154 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.154 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:16.154 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.154 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.154 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.155 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:16.155 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:16.155 00:17:16.155 13:58:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.155 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.155 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.414 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.414 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.414 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.414 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.414 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.414 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.414 { 00:17:16.414 "cntlid": 111, 00:17:16.414 "qid": 0, 00:17:16.414 "state": "enabled", 00:17:16.414 "thread": "nvmf_tgt_poll_group_000", 00:17:16.414 "listen_address": { 00:17:16.414 "trtype": "TCP", 00:17:16.414 "adrfam": "IPv4", 00:17:16.414 "traddr": "10.0.0.2", 00:17:16.414 "trsvcid": "4420" 00:17:16.414 }, 00:17:16.414 "peer_address": { 00:17:16.414 "trtype": "TCP", 00:17:16.414 "adrfam": "IPv4", 00:17:16.414 "traddr": "10.0.0.1", 00:17:16.414 "trsvcid": "49408" 00:17:16.414 }, 00:17:16.414 "auth": { 00:17:16.415 "state": "completed", 00:17:16.415 "digest": "sha512", 00:17:16.415 "dhgroup": "ffdhe2048" 00:17:16.415 } 00:17:16.415 } 00:17:16.415 ]' 00:17:16.415 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.415 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.415 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.675 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:16.675 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.675 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.675 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.675 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.675 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE5MTI2YjI5OGFmNzIwMTczZTU5M2IzOTgwOWQ4YzA3ODZlNGZhYTc0MTgxNDYzYjhlMDg3Mjk1NmMxZGFlYYq7ZUs=: 00:17:17.245 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.245 13:58:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.245 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.245 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.245 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.245 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.245 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.245 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:17.245 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:17.505 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:17.505 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.505 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:17.505 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:17.505 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:17.505 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.505 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.505 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.505 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.505 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.505 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.505 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.766 00:17:17.766 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.766 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.766 13:58:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.026 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.026 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.026 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.026 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.026 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.026 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.026 { 00:17:18.026 "cntlid": 113, 00:17:18.026 "qid": 0, 00:17:18.026 "state": "enabled", 00:17:18.026 "thread": "nvmf_tgt_poll_group_000", 00:17:18.026 "listen_address": { 00:17:18.026 "trtype": "TCP", 00:17:18.026 "adrfam": "IPv4", 00:17:18.026 "traddr": "10.0.0.2", 00:17:18.026 "trsvcid": "4420" 00:17:18.026 }, 00:17:18.026 "peer_address": { 00:17:18.026 "trtype": "TCP", 00:17:18.026 "adrfam": "IPv4", 00:17:18.026 "traddr": "10.0.0.1", 00:17:18.026 "trsvcid": "49442" 00:17:18.026 }, 00:17:18.026 "auth": { 00:17:18.026 "state": "completed", 00:17:18.026 "digest": "sha512", 00:17:18.026 "dhgroup": "ffdhe3072" 00:17:18.026 } 00:17:18.026 } 00:17:18.026 ]' 00:17:18.026 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.026 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.027 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.027 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:18.027 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.027 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.027 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.027 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.286 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTBkYzU4NzJiNjBjY2U1NTc1NTI5M2U4OGU1YmQ4ZmU3N2E5MDc2YjhmODUzMjg5JDjO3w==: --dhchap-ctrl-secret DHHC-1:03:ZDMxYmZlMTc1YTJmMTZkYjQ0ZjhkYTFjMDJjYTJjY2Q3MmI3ZTM0ZjM0YjExYjAzMTQ2MTQ4MGMwNmU2MTg5MU7s0UI=: 00:17:18.857 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.857 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:18.857 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.857 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.857 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.857 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.857 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:18.857 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:18.857 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:18.857 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.857 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:18.857 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:18.857 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:18.857 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.857 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.857 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.857 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.857 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.857 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.857 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.117 00:17:19.117 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.117 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.117 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.377 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:17:19.377 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.377 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.377 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.377 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.377 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.377 { 00:17:19.377 "cntlid": 115, 00:17:19.377 "qid": 0, 00:17:19.377 "state": "enabled", 00:17:19.377 "thread": "nvmf_tgt_poll_group_000", 00:17:19.377 "listen_address": { 00:17:19.377 "trtype": "TCP", 00:17:19.377 "adrfam": "IPv4", 00:17:19.377 "traddr": "10.0.0.2", 00:17:19.377 "trsvcid": "4420" 00:17:19.377 }, 00:17:19.377 "peer_address": { 00:17:19.377 "trtype": "TCP", 00:17:19.377 "adrfam": "IPv4", 00:17:19.377 "traddr": "10.0.0.1", 00:17:19.377 "trsvcid": "34644" 00:17:19.377 }, 00:17:19.377 "auth": { 00:17:19.377 "state": "completed", 00:17:19.377 "digest": "sha512", 00:17:19.377 "dhgroup": "ffdhe3072" 00:17:19.377 } 00:17:19.377 } 00:17:19.377 ]' 00:17:19.377 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.377 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.377 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.377 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:19.377 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.637 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.637 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.637 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.637 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjE3OWFjNmJkM2JhOTA3OWUxZTY4NjJjMTZmYjM3MWIjdNi0: --dhchap-ctrl-secret DHHC-1:02:OTIxMWNjZWIzNmNjN2E0MWVhNDYzNDgxM2FlNjJlMjkyMmQyNGQ1ZGE0NzdkNzdhhnc8Aw==: 00:17:20.206 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.206 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.206 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.206 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.206 13:58:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.206 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.206 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:20.206 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:20.466 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:20.466 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.466 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:20.466 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:20.466 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:20.466 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.466 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.466 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.466 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.466 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.466 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.466 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.727 00:17:20.727 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.727 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.727 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.727 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.727 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.727 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.727 13:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.727 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.727 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.727 { 00:17:20.727 "cntlid": 117, 00:17:20.727 "qid": 0, 00:17:20.727 "state": "enabled", 00:17:20.727 "thread": "nvmf_tgt_poll_group_000", 00:17:20.727 "listen_address": { 00:17:20.727 "trtype": "TCP", 00:17:20.727 "adrfam": "IPv4", 00:17:20.727 "traddr": "10.0.0.2", 00:17:20.727 "trsvcid": "4420" 00:17:20.727 }, 00:17:20.727 "peer_address": { 00:17:20.727 "trtype": "TCP", 00:17:20.727 "adrfam": "IPv4", 00:17:20.727 "traddr": "10.0.0.1", 00:17:20.727 "trsvcid": "34676" 00:17:20.727 }, 00:17:20.727 "auth": { 00:17:20.727 "state": "completed", 00:17:20.727 "digest": "sha512", 00:17:20.727 "dhgroup": "ffdhe3072" 00:17:20.727 } 00:17:20.727 } 00:17:20.727 ]' 00:17:20.727 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.987 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.987 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.987 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:20.987 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.987 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.987 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.987 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.247 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTRiYTUwZmRjYWM0ZmNhNzI3MjRlMzY2NTVjMjMxMzFlZTYxNDc1YmNmOTk2ZDM1jujlQA==: --dhchap-ctrl-secret DHHC-1:01:OTJkODg5NjEwNjU2ODY3YWExYjk2YzJiN2UwZTEyMTSGEAw5: 00:17:21.817 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.817 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:21.817 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.817 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.817 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.817 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.817 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:17:21.817 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:21.817 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:21.817 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.817 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:21.817 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:21.817 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:21.817 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.817 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:21.817 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.817 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.817 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.817 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.817 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.077 00:17:22.077 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.077 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.077 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.338 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.338 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.338 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.338 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.338 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.338 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.338 { 00:17:22.338 "cntlid": 119, 00:17:22.338 "qid": 0, 00:17:22.338 "state": "enabled", 00:17:22.338 "thread": 
"nvmf_tgt_poll_group_000", 00:17:22.338 "listen_address": { 00:17:22.338 "trtype": "TCP", 00:17:22.338 "adrfam": "IPv4", 00:17:22.338 "traddr": "10.0.0.2", 00:17:22.338 "trsvcid": "4420" 00:17:22.338 }, 00:17:22.338 "peer_address": { 00:17:22.338 "trtype": "TCP", 00:17:22.338 "adrfam": "IPv4", 00:17:22.338 "traddr": "10.0.0.1", 00:17:22.338 "trsvcid": "34698" 00:17:22.338 }, 00:17:22.338 "auth": { 00:17:22.338 "state": "completed", 00:17:22.338 "digest": "sha512", 00:17:22.338 "dhgroup": "ffdhe3072" 00:17:22.338 } 00:17:22.338 } 00:17:22.338 ]' 00:17:22.338 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.338 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.338 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.338 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:22.338 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.338 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.338 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.338 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.599 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE5MTI2YjI5OGFmNzIwMTczZTU5M2IzOTgwOWQ4YzA3ODZlNGZhYTc0MTgxNDYzYjhlMDg3Mjk1NmMxZGFlYYq7ZUs=: 00:17:23.169 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.169 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:23.169 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.169 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.169 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.169 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.169 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.169 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:23.169 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:23.429 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:23.429 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.429 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:23.429 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:23.429 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:23.429 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.429 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.429 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.429 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.430 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.430 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.430 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.430 00:17:23.690 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.690 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.690 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.690 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.690 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.690 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.690 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.690 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.690 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.690 { 00:17:23.690 "cntlid": 121, 00:17:23.690 "qid": 0, 00:17:23.690 "state": "enabled", 00:17:23.690 "thread": "nvmf_tgt_poll_group_000", 00:17:23.690 "listen_address": { 00:17:23.690 "trtype": "TCP", 00:17:23.690 "adrfam": "IPv4", 00:17:23.690 "traddr": "10.0.0.2", 00:17:23.690 "trsvcid": "4420" 00:17:23.690 }, 00:17:23.690 "peer_address": { 00:17:23.690 "trtype": "TCP", 00:17:23.690 "adrfam": 
"IPv4", 00:17:23.690 "traddr": "10.0.0.1", 00:17:23.690 "trsvcid": "34734" 00:17:23.690 }, 00:17:23.690 "auth": { 00:17:23.690 "state": "completed", 00:17:23.690 "digest": "sha512", 00:17:23.690 "dhgroup": "ffdhe4096" 00:17:23.690 } 00:17:23.690 } 00:17:23.690 ]' 00:17:23.690 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.690 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.690 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.950 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:23.950 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.950 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.950 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.950 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.950 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTBkYzU4NzJiNjBjY2U1NTc1NTI5M2U4OGU1YmQ4ZmU3N2E5MDc2YjhmODUzMjg5JDjO3w==: --dhchap-ctrl-secret DHHC-1:03:ZDMxYmZlMTc1YTJmMTZkYjQ0ZjhkYTFjMDJjYTJjY2Q3MmI3ZTM0ZjM0YjExYjAzMTQ2MTQ4MGMwNmU2MTg5MU7s0UI=: 00:17:24.520 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.520 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.520 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.520 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.520 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.520 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.520 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:24.520 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:24.780 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:24.780 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.780 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:24.780 
13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:24.780 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:24.780 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.780 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.780 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.780 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.780 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.780 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.780 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.040 00:17:25.040 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.040 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.040 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.300 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.300 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.300 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.300 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.300 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.300 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.300 { 00:17:25.300 "cntlid": 123, 00:17:25.300 "qid": 0, 00:17:25.300 "state": "enabled", 00:17:25.300 "thread": "nvmf_tgt_poll_group_000", 00:17:25.300 "listen_address": { 00:17:25.300 "trtype": "TCP", 00:17:25.300 "adrfam": "IPv4", 00:17:25.300 "traddr": "10.0.0.2", 00:17:25.300 "trsvcid": "4420" 00:17:25.300 }, 00:17:25.300 "peer_address": { 00:17:25.300 "trtype": "TCP", 00:17:25.300 "adrfam": "IPv4", 00:17:25.300 "traddr": "10.0.0.1", 00:17:25.300 "trsvcid": "34774" 00:17:25.300 }, 00:17:25.300 "auth": { 00:17:25.300 "state": "completed", 00:17:25.300 "digest": "sha512", 00:17:25.300 "dhgroup": "ffdhe4096" 00:17:25.300 } 00:17:25.300 } 00:17:25.300 ]' 00:17:25.300 13:58:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.300 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.300 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.300 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.301 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.301 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.301 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.301 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.560 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjE3OWFjNmJkM2JhOTA3OWUxZTY4NjJjMTZmYjM3MWIjdNi0: --dhchap-ctrl-secret DHHC-1:02:OTIxMWNjZWIzNmNjN2E0MWVhNDYzNDgxM2FlNjJlMjkyMmQyNGQ1ZGE0NzdkNzdhhnc8Aw==: 00:17:26.130 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.130 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.130 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.130 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.130 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.130 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.130 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.130 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.130 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:17:26.130 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.130 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:26.130 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:26.130 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:26.130 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:26.130 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.130 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.130 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.130 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.130 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.130 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.390 00:17:26.390 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.390 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.390 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.650 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.650 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.650 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.650 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.650 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.650 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.650 { 00:17:26.650 "cntlid": 125, 00:17:26.650 "qid": 0, 00:17:26.650 "state": "enabled", 00:17:26.650 "thread": "nvmf_tgt_poll_group_000", 00:17:26.650 "listen_address": { 00:17:26.650 "trtype": "TCP", 00:17:26.650 "adrfam": "IPv4", 00:17:26.650 "traddr": "10.0.0.2", 00:17:26.650 "trsvcid": "4420" 00:17:26.650 }, 00:17:26.650 "peer_address": { 00:17:26.650 "trtype": "TCP", 00:17:26.650 "adrfam": "IPv4", 00:17:26.650 "traddr": "10.0.0.1", 00:17:26.650 "trsvcid": "34786" 00:17:26.650 }, 00:17:26.650 "auth": { 00:17:26.650 "state": "completed", 00:17:26.650 "digest": "sha512", 00:17:26.650 "dhgroup": "ffdhe4096" 00:17:26.650 } 00:17:26.650 } 00:17:26.650 ]' 00:17:26.650 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.650 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.650 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.910 
13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:26.910 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.910 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.910 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.910 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.910 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTRiYTUwZmRjYWM0ZmNhNzI3MjRlMzY2NTVjMjMxMzFlZTYxNDc1YmNmOTk2ZDM1jujlQA==: --dhchap-ctrl-secret DHHC-1:01:OTJkODg5NjEwNjU2ODY3YWExYjk2YzJiN2UwZTEyMTSGEAw5: 00:17:27.480 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.480 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:27.480 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.480 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.480 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.480 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.480 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:27.480 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:27.742 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:17:27.742 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.742 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:27.742 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:27.742 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:27.742 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.742 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:27.742 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:27.742 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.742 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.742 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:27.742 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:28.036 00:17:28.036 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.036 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.036 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.337 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.337 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.337 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.337 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.337 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.337 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.337 { 00:17:28.337 "cntlid": 127, 00:17:28.337 "qid": 0, 00:17:28.337 "state": "enabled", 00:17:28.337 "thread": "nvmf_tgt_poll_group_000", 00:17:28.337 "listen_address": { 00:17:28.337 "trtype": "TCP", 00:17:28.337 "adrfam": "IPv4", 00:17:28.337 "traddr": "10.0.0.2", 00:17:28.337 "trsvcid": "4420" 00:17:28.337 }, 00:17:28.337 "peer_address": { 00:17:28.337 "trtype": "TCP", 00:17:28.337 "adrfam": "IPv4", 00:17:28.337 "traddr": "10.0.0.1", 00:17:28.337 "trsvcid": "34800" 00:17:28.337 }, 00:17:28.337 "auth": { 00:17:28.337 "state": "completed", 00:17:28.337 "digest": "sha512", 00:17:28.337 "dhgroup": "ffdhe4096" 00:17:28.337 } 00:17:28.337 } 00:17:28.337 ]' 00:17:28.337 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.337 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.337 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.337 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:28.337 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.337 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.337 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.337 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.597 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE5MTI2YjI5OGFmNzIwMTczZTU5M2IzOTgwOWQ4YzA3ODZlNGZhYTc0MTgxNDYzYjhlMDg3Mjk1NmMxZGFlYYq7ZUs=: 00:17:29.167 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.167 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.167 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.167 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.167 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.167 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.167 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.167 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:29.167 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:29.167 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:17:29.167 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.167 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:29.167 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:29.167 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:29.167 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.167 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.167 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.167 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.167 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.167 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.167 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.426 00:17:29.686 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.686 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.686 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.686 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.686 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.686 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.686 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.686 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.686 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.686 { 00:17:29.686 "cntlid": 129, 00:17:29.686 "qid": 0, 00:17:29.686 "state": "enabled", 00:17:29.686 "thread": "nvmf_tgt_poll_group_000", 00:17:29.686 "listen_address": { 00:17:29.686 "trtype": "TCP", 00:17:29.686 "adrfam": "IPv4", 00:17:29.686 "traddr": "10.0.0.2", 00:17:29.686 "trsvcid": "4420" 00:17:29.686 }, 00:17:29.686 "peer_address": { 00:17:29.686 "trtype": "TCP", 00:17:29.686 "adrfam": "IPv4", 00:17:29.686 "traddr": "10.0.0.1", 00:17:29.686 "trsvcid": "56596" 00:17:29.686 }, 00:17:29.686 "auth": { 00:17:29.686 "state": "completed", 00:17:29.686 "digest": "sha512", 00:17:29.686 "dhgroup": "ffdhe6144" 00:17:29.686 } 00:17:29.686 } 00:17:29.686 ]' 00:17:29.686 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.686 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.686 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.947 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:29.947 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.947 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.947 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.947 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.947 
13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTBkYzU4NzJiNjBjY2U1NTc1NTI5M2U4OGU1YmQ4ZmU3N2E5MDc2YjhmODUzMjg5JDjO3w==: --dhchap-ctrl-secret DHHC-1:03:ZDMxYmZlMTc1YTJmMTZkYjQ0ZjhkYTFjMDJjYTJjY2Q3MmI3ZTM0ZjM0YjExYjAzMTQ2MTQ4MGMwNmU2MTg5MU7s0UI=: 00:17:30.517 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.517 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.517 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.517 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.517 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.517 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.517 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:30.517 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:30.777 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:17:30.777 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.777 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:30.777 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:30.777 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:30.777 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.777 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.777 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.777 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.777 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.777 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.777 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.037 00:17:31.037 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.037 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.037 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.297 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.297 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.297 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.297 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.297 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.297 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.297 { 00:17:31.297 "cntlid": 131, 00:17:31.297 "qid": 0, 00:17:31.297 "state": "enabled", 00:17:31.297 "thread": "nvmf_tgt_poll_group_000", 00:17:31.297 "listen_address": { 00:17:31.297 "trtype": "TCP", 00:17:31.297 "adrfam": "IPv4", 00:17:31.297 "traddr": "10.0.0.2", 00:17:31.297 "trsvcid": "4420" 00:17:31.297 }, 00:17:31.297 "peer_address": { 00:17:31.297 "trtype": "TCP", 00:17:31.297 "adrfam": "IPv4", 00:17:31.297 "traddr": "10.0.0.1", 00:17:31.297 "trsvcid": "56642" 00:17:31.297 }, 00:17:31.297 "auth": { 00:17:31.297 "state": "completed", 00:17:31.297 "digest": "sha512", 00:17:31.297 "dhgroup": "ffdhe6144" 00:17:31.297 } 00:17:31.297 } 00:17:31.297 ]' 00:17:31.297 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.297 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.297 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.297 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:31.297 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.297 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.297 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.297 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.557 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:01:YjE3OWFjNmJkM2JhOTA3OWUxZTY4NjJjMTZmYjM3MWIjdNi0: --dhchap-ctrl-secret DHHC-1:02:OTIxMWNjZWIzNmNjN2E0MWVhNDYzNDgxM2FlNjJlMjkyMmQyNGQ1ZGE0NzdkNzdhhnc8Aw==: 00:17:32.127 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.127 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:32.127 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.127 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.127 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.127 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.127 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:32.127 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:32.127 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:17:32.127 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.127 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:32.127 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:32.127 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:32.127 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.387 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.387 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.387 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.387 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.387 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.387 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.647 
00:17:32.647 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.647 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.647 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.907 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.907 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.907 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.907 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.907 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.907 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.907 { 00:17:32.907 "cntlid": 133, 00:17:32.907 "qid": 0, 00:17:32.907 "state": "enabled", 00:17:32.907 "thread": "nvmf_tgt_poll_group_000", 00:17:32.907 "listen_address": { 00:17:32.907 "trtype": "TCP", 00:17:32.907 "adrfam": "IPv4", 00:17:32.907 "traddr": "10.0.0.2", 00:17:32.907 "trsvcid": "4420" 00:17:32.907 }, 00:17:32.907 "peer_address": { 00:17:32.907 "trtype": "TCP", 00:17:32.907 "adrfam": "IPv4", 00:17:32.907 "traddr": "10.0.0.1", 00:17:32.907 "trsvcid": "56662" 00:17:32.907 }, 00:17:32.907 "auth": { 00:17:32.907 "state": "completed", 00:17:32.907 "digest": "sha512", 00:17:32.907 "dhgroup": "ffdhe6144" 00:17:32.907 } 00:17:32.907 } 00:17:32.907 ]' 00:17:32.907 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.907 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.907 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.907 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:32.907 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.907 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.907 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.907 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.166 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTRiYTUwZmRjYWM0ZmNhNzI3MjRlMzY2NTVjMjMxMzFlZTYxNDc1YmNmOTk2ZDM1jujlQA==: --dhchap-ctrl-secret DHHC-1:01:OTJkODg5NjEwNjU2ODY3YWExYjk2YzJiN2UwZTEyMTSGEAw5: 00:17:33.735 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.735 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:17:33.735 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.735 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.735 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.735 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.735 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.735 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:33.735 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:33.735 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:33.735 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.735 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:33.735 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:33.735 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:33.735 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.735 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:33.735 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.735 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.735 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.735 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.735 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.305 00:17:34.305 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.305 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.305 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:34.305 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.305 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.305 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.305 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.305 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.305 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.305 { 00:17:34.305 "cntlid": 135, 00:17:34.305 "qid": 0, 00:17:34.305 "state": "enabled", 00:17:34.305 "thread": "nvmf_tgt_poll_group_000", 00:17:34.305 "listen_address": { 00:17:34.305 "trtype": "TCP", 00:17:34.305 "adrfam": "IPv4", 00:17:34.305 "traddr": "10.0.0.2", 00:17:34.305 "trsvcid": "4420" 00:17:34.305 }, 00:17:34.305 "peer_address": { 00:17:34.305 "trtype": "TCP", 00:17:34.305 "adrfam": "IPv4", 00:17:34.305 "traddr": "10.0.0.1", 00:17:34.305 "trsvcid": "56688" 00:17:34.305 }, 00:17:34.305 "auth": { 00:17:34.305 "state": "completed", 00:17:34.305 "digest": "sha512", 00:17:34.305 "dhgroup": "ffdhe6144" 00:17:34.305 } 00:17:34.305 } 00:17:34.305 ]' 00:17:34.305 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.305 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.305 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.305 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:34.305 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.565 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.565 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.565 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.565 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE5MTI2YjI5OGFmNzIwMTczZTU5M2IzOTgwOWQ4YzA3ODZlNGZhYTc0MTgxNDYzYjhlMDg3Mjk1NmMxZGFlYYq7ZUs=: 00:17:35.158 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.158 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:35.158 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.158 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:35.158 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.158 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.158 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.158 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:35.158 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:35.418 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:35.418 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.418 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:35.418 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:35.418 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:35.418 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.418 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.418 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.418 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.418 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.418 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.418 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.998 00:17:35.999 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.999 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.999 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.999 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.999 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
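The nvmf_subsystem_get_qpairs call just issued is how every pass (here sha512, ffdhe8192, key index 0) is validated before teardown: the host RPC socket must report the attached controller as nvme0, and the auth section of the target's qpair must show the negotiated digest, dhgroup, and a completed state. A sketch of that check, reusing the jq filters from the trace; the expected dhgroup value changes with each pass:

  test "$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" = nvme0
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  test "$(echo "$qpairs" | jq -r '.[0].auth.digest')"  = sha512
  test "$(echo "$qpairs" | jq -r '.[0].auth.dhgroup')" = ffdhe8192
  test "$(echo "$qpairs" | jq -r '.[0].auth.state')"   = completed
  # Detach before exercising the same secrets through the kernel initiator
  # (nvme connect with --dhchap-secret / --dhchap-ctrl-secret) and moving on
  # to the next digest/dhgroup/key combination.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0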
00:17:35.999 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.999 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.999 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.999 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.999 { 00:17:35.999 "cntlid": 137, 00:17:35.999 "qid": 0, 00:17:35.999 "state": "enabled", 00:17:35.999 "thread": "nvmf_tgt_poll_group_000", 00:17:35.999 "listen_address": { 00:17:35.999 "trtype": "TCP", 00:17:35.999 "adrfam": "IPv4", 00:17:35.999 "traddr": "10.0.0.2", 00:17:35.999 "trsvcid": "4420" 00:17:35.999 }, 00:17:35.999 "peer_address": { 00:17:35.999 "trtype": "TCP", 00:17:35.999 "adrfam": "IPv4", 00:17:35.999 "traddr": "10.0.0.1", 00:17:35.999 "trsvcid": "56710" 00:17:35.999 }, 00:17:35.999 "auth": { 00:17:35.999 "state": "completed", 00:17:35.999 "digest": "sha512", 00:17:35.999 "dhgroup": "ffdhe8192" 00:17:35.999 } 00:17:35.999 } 00:17:35.999 ]' 00:17:35.999 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.999 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.999 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.999 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:35.999 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.261 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.261 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.261 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.261 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTBkYzU4NzJiNjBjY2U1NTc1NTI5M2U4OGU1YmQ4ZmU3N2E5MDc2YjhmODUzMjg5JDjO3w==: --dhchap-ctrl-secret DHHC-1:03:ZDMxYmZlMTc1YTJmMTZkYjQ0ZjhkYTFjMDJjYTJjY2Q3MmI3ZTM0ZjM0YjExYjAzMTQ2MTQ4MGMwNmU2MTg5MU7s0UI=: 00:17:36.831 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.831 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:36.831 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.831 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.831 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.831 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.831 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:36.831 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:37.090 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:37.090 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.090 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:37.090 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:37.090 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:37.090 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.090 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.090 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.090 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.090 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.090 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.090 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.659 00:17:37.659 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.659 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.659 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.659 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.659 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.659 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.659 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.659 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.659 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.659 { 00:17:37.659 "cntlid": 139, 00:17:37.659 "qid": 0, 00:17:37.659 "state": "enabled", 00:17:37.659 "thread": "nvmf_tgt_poll_group_000", 00:17:37.659 "listen_address": { 00:17:37.659 "trtype": "TCP", 00:17:37.659 "adrfam": "IPv4", 00:17:37.659 "traddr": "10.0.0.2", 00:17:37.659 "trsvcid": "4420" 00:17:37.659 }, 00:17:37.659 "peer_address": { 00:17:37.659 "trtype": "TCP", 00:17:37.659 "adrfam": "IPv4", 00:17:37.659 "traddr": "10.0.0.1", 00:17:37.659 "trsvcid": "56738" 00:17:37.659 }, 00:17:37.659 "auth": { 00:17:37.659 "state": "completed", 00:17:37.659 "digest": "sha512", 00:17:37.659 "dhgroup": "ffdhe8192" 00:17:37.659 } 00:17:37.659 } 00:17:37.659 ]' 00:17:37.659 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.919 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.919 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.919 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:37.919 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.919 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.919 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.919 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.178 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjE3OWFjNmJkM2JhOTA3OWUxZTY4NjJjMTZmYjM3MWIjdNi0: --dhchap-ctrl-secret DHHC-1:02:OTIxMWNjZWIzNmNjN2E0MWVhNDYzNDgxM2FlNjJlMjkyMmQyNGQ1ZGE0NzdkNzdhhnc8Aw==: 00:17:38.747 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.747 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.747 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.747 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.747 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.748 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.748 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:38.748 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:38.748 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:17:38.748 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.748 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:38.748 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:38.748 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:38.748 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.748 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.748 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.748 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.748 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.748 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.748 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.316 00:17:39.316 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.316 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.316 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.576 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.576 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.576 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.576 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.576 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.576 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.576 { 00:17:39.576 "cntlid": 141, 00:17:39.576 "qid": 0, 00:17:39.576 "state": "enabled", 00:17:39.576 "thread": "nvmf_tgt_poll_group_000", 00:17:39.576 "listen_address": 
{ 00:17:39.576 "trtype": "TCP", 00:17:39.576 "adrfam": "IPv4", 00:17:39.576 "traddr": "10.0.0.2", 00:17:39.576 "trsvcid": "4420" 00:17:39.576 }, 00:17:39.576 "peer_address": { 00:17:39.576 "trtype": "TCP", 00:17:39.576 "adrfam": "IPv4", 00:17:39.576 "traddr": "10.0.0.1", 00:17:39.576 "trsvcid": "56766" 00:17:39.576 }, 00:17:39.576 "auth": { 00:17:39.576 "state": "completed", 00:17:39.576 "digest": "sha512", 00:17:39.576 "dhgroup": "ffdhe8192" 00:17:39.576 } 00:17:39.576 } 00:17:39.576 ]' 00:17:39.576 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.576 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.576 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.576 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:39.576 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.576 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.576 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.576 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.836 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTRiYTUwZmRjYWM0ZmNhNzI3MjRlMzY2NTVjMjMxMzFlZTYxNDc1YmNmOTk2ZDM1jujlQA==: --dhchap-ctrl-secret DHHC-1:01:OTJkODg5NjEwNjU2ODY3YWExYjk2YzJiN2UwZTEyMTSGEAw5: 00:17:40.406 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.406 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.406 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.406 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.406 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.406 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.406 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:40.406 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:40.406 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:17:40.406 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.406 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:40.406 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:40.406 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:40.406 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.406 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:40.406 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.406 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.406 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.406 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:40.406 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.022 00:17:41.022 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.022 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.022 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.022 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.022 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.022 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.022 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.022 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.022 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.022 { 00:17:41.022 "cntlid": 143, 00:17:41.022 "qid": 0, 00:17:41.022 "state": "enabled", 00:17:41.022 "thread": "nvmf_tgt_poll_group_000", 00:17:41.022 "listen_address": { 00:17:41.022 "trtype": "TCP", 00:17:41.022 "adrfam": "IPv4", 00:17:41.022 "traddr": "10.0.0.2", 00:17:41.022 "trsvcid": "4420" 00:17:41.022 }, 00:17:41.022 "peer_address": { 00:17:41.022 "trtype": "TCP", 00:17:41.022 "adrfam": "IPv4", 00:17:41.022 "traddr": "10.0.0.1", 00:17:41.022 "trsvcid": "35772" 00:17:41.022 }, 00:17:41.022 "auth": { 00:17:41.022 "state": "completed", 00:17:41.022 "digest": "sha512", 00:17:41.022 "dhgroup": 
"ffdhe8192" 00:17:41.022 } 00:17:41.022 } 00:17:41.022 ]' 00:17:41.022 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.282 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.282 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.282 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.282 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.282 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.282 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.282 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.542 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE5MTI2YjI5OGFmNzIwMTczZTU5M2IzOTgwOWQ4YzA3ODZlNGZhYTc0MTgxNDYzYjhlMDg3Mjk1NmMxZGFlYYq7ZUs=: 00:17:41.802 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.062 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.633 00:17:42.633 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.633 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.633 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.893 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.893 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.893 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.893 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.893 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.893 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.893 { 00:17:42.893 "cntlid": 145, 00:17:42.893 "qid": 0, 00:17:42.893 "state": "enabled", 00:17:42.893 "thread": "nvmf_tgt_poll_group_000", 00:17:42.893 "listen_address": { 00:17:42.893 "trtype": "TCP", 00:17:42.893 "adrfam": "IPv4", 00:17:42.893 "traddr": "10.0.0.2", 00:17:42.893 "trsvcid": "4420" 00:17:42.893 }, 00:17:42.893 "peer_address": { 00:17:42.893 "trtype": "TCP", 00:17:42.893 "adrfam": "IPv4", 00:17:42.893 "traddr": "10.0.0.1", 00:17:42.893 "trsvcid": "35800" 00:17:42.893 }, 00:17:42.893 "auth": { 00:17:42.893 
"state": "completed", 00:17:42.893 "digest": "sha512", 00:17:42.893 "dhgroup": "ffdhe8192" 00:17:42.893 } 00:17:42.893 } 00:17:42.893 ]' 00:17:42.893 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.893 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.893 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.893 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:42.893 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.893 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.893 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.893 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.153 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTBkYzU4NzJiNjBjY2U1NTc1NTI5M2U4OGU1YmQ4ZmU3N2E5MDc2YjhmODUzMjg5JDjO3w==: --dhchap-ctrl-secret DHHC-1:03:ZDMxYmZlMTc1YTJmMTZkYjQ0ZjhkYTFjMDJjYTJjY2Q3MmI3ZTM0ZjM0YjExYjAzMTQ2MTQ4MGMwNmU2MTg5MU7s0UI=: 00:17:43.722 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.723 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.723 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.723 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.723 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.723 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:43.723 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.723 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.723 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.723 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:43.723 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:43.723 13:59:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:43.723 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:43.723 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.723 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:43.723 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.723 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:43.723 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:43.983 request: 00:17:43.983 { 00:17:43.983 "name": "nvme0", 00:17:43.983 "trtype": "tcp", 00:17:43.983 "traddr": "10.0.0.2", 00:17:43.983 "adrfam": "ipv4", 00:17:43.983 "trsvcid": "4420", 00:17:43.983 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:43.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:43.983 "prchk_reftag": false, 00:17:43.983 "prchk_guard": false, 00:17:43.983 "hdgst": false, 00:17:43.983 "ddgst": false, 00:17:43.983 "dhchap_key": "key2", 00:17:43.983 "method": "bdev_nvme_attach_controller", 00:17:43.983 "req_id": 1 00:17:43.983 } 00:17:43.983 Got JSON-RPC error response 00:17:43.983 response: 00:17:43.983 { 00:17:43.983 "code": -5, 00:17:43.983 "message": "Input/output error" 00:17:43.983 } 00:17:43.983 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:43.983 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:43.983 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:43.983 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:43.983 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.983 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.983 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.983 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.983 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.983 
13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.984 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.984 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.984 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:43.984 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:43.984 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:43.984 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:43.984 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.984 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:43.984 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.984 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:43.984 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:44.554 request: 00:17:44.554 { 00:17:44.554 "name": "nvme0", 00:17:44.554 "trtype": "tcp", 00:17:44.554 "traddr": "10.0.0.2", 00:17:44.554 "adrfam": "ipv4", 00:17:44.554 "trsvcid": "4420", 00:17:44.554 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:44.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:44.554 "prchk_reftag": false, 00:17:44.554 "prchk_guard": false, 00:17:44.554 "hdgst": false, 00:17:44.554 "ddgst": false, 00:17:44.554 "dhchap_key": "key1", 00:17:44.554 "dhchap_ctrlr_key": "ckey2", 00:17:44.554 "method": "bdev_nvme_attach_controller", 00:17:44.554 "req_id": 1 00:17:44.554 } 00:17:44.554 Got JSON-RPC error response 00:17:44.554 response: 00:17:44.554 { 00:17:44.554 "code": -5, 00:17:44.554 "message": "Input/output error" 00:17:44.554 } 00:17:44.554 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:44.554 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:44.554 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:44.554 13:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:44.554 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:44.554 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.554 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.554 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.554 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:44.554 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.554 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.554 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.554 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.554 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:44.554 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.554 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:44.554 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.554 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:44.554 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.554 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.554 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.124 request: 00:17:45.124 { 00:17:45.124 "name": "nvme0", 00:17:45.124 "trtype": "tcp", 00:17:45.124 "traddr": "10.0.0.2", 00:17:45.124 "adrfam": "ipv4", 00:17:45.124 "trsvcid": "4420", 00:17:45.124 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:45.124 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:45.124 "prchk_reftag": false, 00:17:45.124 "prchk_guard": false, 00:17:45.124 "hdgst": false, 00:17:45.124 "ddgst": false, 00:17:45.124 "dhchap_key": "key1", 00:17:45.124 "dhchap_ctrlr_key": "ckey1", 00:17:45.124 "method": "bdev_nvme_attach_controller", 00:17:45.124 "req_id": 1 00:17:45.124 } 00:17:45.124 Got JSON-RPC error response 00:17:45.124 response: 00:17:45.124 { 00:17:45.124 "code": -5, 00:17:45.124 "message": "Input/output error" 00:17:45.124 } 00:17:45.124 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:45.124 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:45.124 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:45.124 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:45.124 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:45.124 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.124 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.124 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.124 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2954487 00:17:45.124 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2954487 ']' 00:17:45.124 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2954487 00:17:45.124 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:17:45.124 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:45.124 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2954487 00:17:45.124 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:45.124 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:45.124 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2954487' 00:17:45.124 killing process with pid 2954487 00:17:45.124 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2954487 00:17:45.124 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2954487 00:17:45.125 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:45.125 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:45.125 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:45.125 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.125 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=2974608 00:17:45.125 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2974608 00:17:45.125 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2974608 ']' 00:17:45.125 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.125 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:45.125 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.125 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:45.125 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.125 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:46.064 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:46.064 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:46.064 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:46.064 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:46.064 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.064 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.064 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:46.064 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2974608 00:17:46.064 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2974608 ']' 00:17:46.064 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.064 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:46.064 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
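After the expected-failure attaches above (each mismatched or unconfigured key/ctrlr-key combination returns JSON-RPC error -5, Input/output error), the first target (pid 2954487) is killed and a fresh one is started inside the test netns with --wait-for-rpc and the nvmf_auth debug log flag, so the remaining cases can be traced at the auth layer. A minimal sketch of that restart-and-wait step, with the netns name and flags taken from the trace, paths abbreviated relative to the spdk checkout, and the readiness poll shown as one common way to wait on the default RPC socket:

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # Poll the RPC socket until the new target answers before sending further RPCs.
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done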
00:17:46.064 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:46.064 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.324 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:46.324 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:46.324 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:46.324 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.324 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.324 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.324 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:46.324 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.324 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:46.324 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:46.324 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:46.324 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.324 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:46.324 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.324 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.324 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.324 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:46.324 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:46.892 00:17:46.892 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.892 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.892 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.152 { 00:17:47.152 "cntlid": 1, 00:17:47.152 "qid": 0, 00:17:47.152 "state": "enabled", 00:17:47.152 "thread": "nvmf_tgt_poll_group_000", 00:17:47.152 "listen_address": { 00:17:47.152 "trtype": "TCP", 00:17:47.152 "adrfam": "IPv4", 00:17:47.152 "traddr": "10.0.0.2", 00:17:47.152 "trsvcid": "4420" 00:17:47.152 }, 00:17:47.152 "peer_address": { 00:17:47.152 "trtype": "TCP", 00:17:47.152 "adrfam": "IPv4", 00:17:47.152 "traddr": "10.0.0.1", 00:17:47.152 "trsvcid": "35846" 00:17:47.152 }, 00:17:47.152 "auth": { 00:17:47.152 "state": "completed", 00:17:47.152 "digest": "sha512", 00:17:47.152 "dhgroup": "ffdhe8192" 00:17:47.152 } 00:17:47.152 } 00:17:47.152 ]' 00:17:47.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:47.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.412 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE5MTI2YjI5OGFmNzIwMTczZTU5M2IzOTgwOWQ4YzA3ODZlNGZhYTc0MTgxNDYzYjhlMDg3Mjk1NmMxZGFlYYq7ZUs=: 00:17:47.983 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.983 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.983 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.983 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.983 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.983 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:47.983 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.983 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.983 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.983 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:47.983 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:47.983 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:47.983 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:47.983 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:47.983 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:47.983 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.983 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:47.983 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.983 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:47.983 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.243 request: 00:17:48.243 { 00:17:48.243 "name": "nvme0", 00:17:48.243 "trtype": "tcp", 00:17:48.243 "traddr": "10.0.0.2", 00:17:48.243 "adrfam": "ipv4", 00:17:48.243 "trsvcid": "4420", 00:17:48.243 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:48.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:48.243 "prchk_reftag": false, 00:17:48.243 "prchk_guard": false, 00:17:48.243 "hdgst": false, 00:17:48.243 "ddgst": false, 00:17:48.243 "dhchap_key": "key3", 00:17:48.243 "method": "bdev_nvme_attach_controller", 00:17:48.243 "req_id": 1 00:17:48.243 } 00:17:48.243 Got JSON-RPC error response 00:17:48.243 response: 00:17:48.243 { 00:17:48.243 "code": -5, 00:17:48.243 "message": "Input/output error" 00:17:48.243 } 00:17:48.243 13:59:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:48.243 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:48.243 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:48.243 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:48.243 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:48.243 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:48.243 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:48.243 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:48.503 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.503 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:48.503 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.503 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:48.503 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.503 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:48.503 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.503 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.503 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.763 request: 00:17:48.763 { 00:17:48.763 "name": "nvme0", 00:17:48.763 "trtype": "tcp", 00:17:48.763 "traddr": "10.0.0.2", 00:17:48.763 "adrfam": "ipv4", 00:17:48.763 "trsvcid": "4420", 00:17:48.763 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:48.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:48.763 "prchk_reftag": false, 00:17:48.763 "prchk_guard": false, 00:17:48.763 "hdgst": false, 00:17:48.763 "ddgst": false, 00:17:48.763 "dhchap_key": "key3", 00:17:48.763 
"method": "bdev_nvme_attach_controller", 00:17:48.763 "req_id": 1 00:17:48.763 } 00:17:48.763 Got JSON-RPC error response 00:17:48.763 response: 00:17:48.763 { 00:17:48.763 "code": -5, 00:17:48.763 "message": "Input/output error" 00:17:48.763 } 00:17:48.763 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:48.763 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:48.763 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:48.763 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:48.763 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:48.763 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:48.763 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:48.763 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:48.763 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:48.763 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:48.763 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:48.763 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.763 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.763 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.763 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:48.763 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.763 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.763 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.764 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:48.764 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:48.764 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:48.764 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:48.764 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.764 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:48.764 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.764 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:48.764 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:49.024 request: 00:17:49.024 { 00:17:49.024 "name": "nvme0", 00:17:49.024 "trtype": "tcp", 00:17:49.024 "traddr": "10.0.0.2", 00:17:49.024 "adrfam": "ipv4", 00:17:49.024 "trsvcid": "4420", 00:17:49.024 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:49.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:49.024 "prchk_reftag": false, 00:17:49.024 "prchk_guard": false, 00:17:49.024 "hdgst": false, 00:17:49.024 "ddgst": false, 00:17:49.024 "dhchap_key": "key0", 00:17:49.024 "dhchap_ctrlr_key": "key1", 00:17:49.024 "method": "bdev_nvme_attach_controller", 00:17:49.024 "req_id": 1 00:17:49.024 } 00:17:49.024 Got JSON-RPC error response 00:17:49.024 response: 00:17:49.024 { 00:17:49.024 "code": -5, 00:17:49.024 "message": "Input/output error" 00:17:49.024 } 00:17:49.024 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:49.024 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:49.024 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:49.024 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:49.024 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:49.024 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:49.283 00:17:49.283 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:49.283 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
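The -5 Input/output error above is the expected outcome of this step: the host entry was re-registered on the subsystem without any DH-CHAP key, so an attach that still presents key0/ckey1 must be rejected. A condensed sketch of the two sides, using only the rpc.py calls captured in this run (the suite's target-side rpc_cmd talks to the default /var/tmp/spdk.sock, while the host-side bdev layer listens on /var/tmp/host.sock):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

# target side: allow the host again, but register no DH-CHAP key for it
$RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 $HOSTNQN
$RPC nvmf_subsystem_add_host    nqn.2024-03.io.spdk:cnode0 $HOSTNQN

# host side: attach still offering key0/ckey1 -- expected to fail with the
# "Input/output error" (-5) JSON-RPC response shown above
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key key1 && echo "unexpected success"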
00:17:49.284 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.543 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.543 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.543 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.543 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:49.543 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:49.543 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2954728 00:17:49.543 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2954728 ']' 00:17:49.543 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2954728 00:17:49.543 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:17:49.544 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:49.544 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2954728 00:17:49.544 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:49.544 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:49.544 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2954728' 00:17:49.544 killing process with pid 2954728 00:17:49.544 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2954728 00:17:49.544 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2954728 00:17:50.114 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:50.114 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:50.114 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:50.114 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:50.114 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:50.114 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:50.115 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:50.115 rmmod nvme_tcp 00:17:50.115 rmmod nvme_fabrics 00:17:50.115 rmmod nvme_keyring 00:17:50.115 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:50.115 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:50.115 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:50.115 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 2974608 ']' 00:17:50.115 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2974608 00:17:50.115 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2974608 ']' 00:17:50.115 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2974608 00:17:50.115 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:17:50.115 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:50.115 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2974608 00:17:50.115 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:50.115 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:50.115 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2974608' 00:17:50.115 killing process with pid 2974608 00:17:50.115 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2974608 00:17:50.115 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2974608 00:17:50.375 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:50.375 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:50.375 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:50.375 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:50.375 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:50.375 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.375 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:50.375 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.286 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:52.286 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.SGg /tmp/spdk.key-sha256.nyg /tmp/spdk.key-sha384.txO /tmp/spdk.key-sha512.fKB /tmp/spdk.key-sha512.JvO /tmp/spdk.key-sha384.5rM /tmp/spdk.key-sha256.A2E '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:52.286 00:17:52.286 real 2m7.156s 00:17:52.286 user 4m52.588s 00:17:52.286 sys 0m17.740s 00:17:52.286 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:52.286 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.286 ************************************ 00:17:52.286 END TEST nvmf_auth_target 00:17:52.286 ************************************ 00:17:52.286 13:59:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:52.286 13:59:19 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:52.286 13:59:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:52.286 13:59:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:52.286 13:59:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:52.286 ************************************ 00:17:52.286 START TEST nvmf_bdevio_no_huge 00:17:52.286 ************************************ 00:17:52.286 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:52.547 * Looking for test storage... 00:17:52.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.547 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:52.548 13:59:19 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:52.548 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:57.835 13:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:57.835 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.835 13:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:57.835 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:57.835 Found net devices under 0000:86:00.0: cvl_0_0 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:57.835 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
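The device-discovery trace above boils down to mapping the two Intel E810 ports (device ID 0x159b) to their netdev names through sysfs; a minimal equivalent for the bus addresses seen in this run:

# map each E810 port found in this run to its kernel netdev name
for pci in 0000:86:00.0 0000:86:00.1; do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] && echo "$pci -> ${dev##*/}"
    done
done
# prints here: 0000:86:00.0 -> cvl_0_0 and 0000:86:00.1 -> cvl_0_1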
00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:57.836 Found net devices under 0000:86:00.1: cvl_0_1 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:57.836 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:57.836 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:17:57.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:17:57.836 00:17:57.836 --- 10.0.0.2 ping statistics --- 00:17:57.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.836 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:57.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:57.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.492 ms 00:17:57.836 00:17:57.836 --- 10.0.0.1 ping statistics --- 00:17:57.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.836 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2978867 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2978867 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 2978867 ']' 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
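The nvmf_tcp_init trace above moves the first port into a private namespace for the target and leaves the second in the root namespace for the initiator, then checks reachability in both directions. Condensed from the commands captured in this run:

# topology used here: cvl_0_0 (target side, 10.0.0.2) inside cvl_0_0_ns_spdk,
# cvl_0_1 (initiator side, 10.0.0.1) left in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# sanity checks, as in the log: target IP from the host, host IP from the netns
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1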
00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:57.836 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:57.836 [2024-07-26 13:59:25.210450] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:17:57.836 [2024-07-26 13:59:25.210498] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:58.097 [2024-07-26 13:59:25.274592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:58.097 [2024-07-26 13:59:25.359900] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.097 [2024-07-26 13:59:25.359933] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.097 [2024-07-26 13:59:25.359939] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.097 [2024-07-26 13:59:25.359945] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.097 [2024-07-26 13:59:25.359950] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.097 [2024-07-26 13:59:25.360082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:58.097 [2024-07-26 13:59:25.360188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:58.097 [2024-07-26 13:59:25.360274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:58.097 [2024-07-26 13:59:25.360275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:58.668 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:58.668 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:17:58.668 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:58.668 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:58.668 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:58.668 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.668 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:58.668 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.668 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:58.668 [2024-07-26 13:59:26.071252] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.668 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.668 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:58.668 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.668 13:59:26 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:58.668 Malloc0 00:17:58.668 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.668 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:58.668 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.668 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:58.668 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.668 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:58.668 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.668 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:58.668 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.928 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.928 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.928 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:58.928 [2024-07-26 13:59:26.107484] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.928 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.928 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:58.928 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:58.928 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:58.928 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:58.928 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:58.928 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:58.928 { 00:17:58.928 "params": { 00:17:58.928 "name": "Nvme$subsystem", 00:17:58.928 "trtype": "$TEST_TRANSPORT", 00:17:58.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:58.928 "adrfam": "ipv4", 00:17:58.928 "trsvcid": "$NVMF_PORT", 00:17:58.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:58.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:58.928 "hdgst": ${hdgst:-false}, 00:17:58.928 "ddgst": ${ddgst:-false} 00:17:58.928 }, 00:17:58.928 "method": "bdev_nvme_attach_controller" 00:17:58.928 } 00:17:58.928 EOF 00:17:58.928 )") 00:17:58.928 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:58.928 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
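The rpc_cmd calls above stand up the device under test for bdevio: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and a subsystem exposing it on 10.0.0.2:4420; bdevio then consumes the attach_controller JSON generated just below via --json /dev/fd/62. The same target-side sequence written out against scripts/rpc.py (the suite's rpc_cmd wrapper drives the default /var/tmp/spdk.sock):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, options as used by the suite
$RPC bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420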
00:17:58.928 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:58.928 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:58.928 "params": { 00:17:58.928 "name": "Nvme1", 00:17:58.928 "trtype": "tcp", 00:17:58.928 "traddr": "10.0.0.2", 00:17:58.928 "adrfam": "ipv4", 00:17:58.928 "trsvcid": "4420", 00:17:58.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:58.928 "hdgst": false, 00:17:58.928 "ddgst": false 00:17:58.928 }, 00:17:58.928 "method": "bdev_nvme_attach_controller" 00:17:58.928 }' 00:17:58.928 [2024-07-26 13:59:26.155585] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:17:58.928 [2024-07-26 13:59:26.155631] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2978925 ] 00:17:58.928 [2024-07-26 13:59:26.208875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:58.928 [2024-07-26 13:59:26.295162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.928 [2024-07-26 13:59:26.295258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.928 [2024-07-26 13:59:26.295258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.188 I/O targets: 00:17:59.188 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:59.188 00:17:59.188 00:17:59.188 CUnit - A unit testing framework for C - Version 2.1-3 00:17:59.188 http://cunit.sourceforge.net/ 00:17:59.188 00:17:59.188 00:17:59.188 Suite: bdevio tests on: Nvme1n1 00:17:59.448 Test: blockdev write read block ...passed 00:17:59.448 Test: blockdev write zeroes read block ...passed 00:17:59.448 Test: blockdev write zeroes read no split ...passed 00:17:59.448 Test: blockdev write zeroes read split ...passed 00:17:59.448 Test: blockdev write zeroes read split partial ...passed 00:17:59.448 Test: blockdev reset ...[2024-07-26 13:59:26.855778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:59.448 [2024-07-26 13:59:26.855841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1814300 (9): Bad file descriptor 00:17:59.448 [2024-07-26 13:59:26.874738] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:59.448 passed 00:17:59.448 Test: blockdev write read 8 blocks ...passed 00:17:59.708 Test: blockdev write read size > 128k ...passed 00:17:59.708 Test: blockdev write read invalid size ...passed 00:17:59.708 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:59.708 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:59.708 Test: blockdev write read max offset ...passed 00:17:59.708 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:59.708 Test: blockdev writev readv 8 blocks ...passed 00:17:59.708 Test: blockdev writev readv 30 x 1block ...passed 00:17:59.708 Test: blockdev writev readv block ...passed 00:17:59.708 Test: blockdev writev readv size > 128k ...passed 00:17:59.708 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:59.708 Test: blockdev comparev and writev ...[2024-07-26 13:59:27.115607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:59.708 [2024-07-26 13:59:27.115633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.708 [2024-07-26 13:59:27.115646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:59.708 [2024-07-26 13:59:27.115654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.708 [2024-07-26 13:59:27.116216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:59.708 [2024-07-26 13:59:27.116228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:59.708 [2024-07-26 13:59:27.116251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:59.708 [2024-07-26 13:59:27.116260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:59.708 [2024-07-26 13:59:27.116735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:59.708 [2024-07-26 13:59:27.116746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:59.709 [2024-07-26 13:59:27.116757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:59.709 [2024-07-26 13:59:27.116765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:59.709 [2024-07-26 13:59:27.117236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:59.709 [2024-07-26 13:59:27.117248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:59.709 [2024-07-26 13:59:27.117259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:59.709 [2024-07-26 13:59:27.117267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:59.969 passed 00:17:59.969 Test: blockdev nvme passthru rw ...passed 00:17:59.969 Test: blockdev nvme passthru vendor specific ...[2024-07-26 13:59:27.202060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:59.969 [2024-07-26 13:59:27.202075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:59.969 [2024-07-26 13:59:27.202575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:59.969 [2024-07-26 13:59:27.202587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:59.969 [2024-07-26 13:59:27.203089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:59.969 [2024-07-26 13:59:27.203100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:59.969 [2024-07-26 13:59:27.203513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:59.969 [2024-07-26 13:59:27.203524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:59.969 passed 00:17:59.969 Test: blockdev nvme admin passthru ...passed 00:17:59.969 Test: blockdev copy ...passed 00:17:59.969 00:17:59.969 Run Summary: Type Total Ran Passed Failed Inactive 00:17:59.969 suites 1 1 n/a 0 0 00:17:59.969 tests 23 23 23 0 0 00:17:59.969 asserts 152 152 152 0 n/a 00:17:59.969 00:17:59.969 Elapsed time = 1.326 seconds 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:00.230 rmmod nvme_tcp 00:18:00.230 rmmod nvme_fabrics 00:18:00.230 rmmod nvme_keyring 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2978867 ']' 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2978867 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 2978867 ']' 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 2978867 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2978867 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2978867' 00:18:00.230 killing process with pid 2978867 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 2978867 00:18:00.230 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 2978867 00:18:00.801 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:00.801 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:00.801 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:00.801 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:00.801 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:00.801 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.801 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.801 13:59:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.712 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:02.712 00:18:02.712 real 0m10.331s 00:18:02.712 user 0m14.308s 00:18:02.712 sys 0m4.882s 00:18:02.712 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:02.712 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:02.712 ************************************ 00:18:02.712 END TEST nvmf_bdevio_no_huge 00:18:02.713 ************************************ 00:18:02.713 13:59:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:02.713 13:59:30 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:02.713 13:59:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:02.713 13:59:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:02.713 ************************************ 00:18:02.713 START TEST nvmf_tls 00:18:02.713 ************************************ 00:18:02.713 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:02.973 * Looking for test storage... 00:18:02.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.973 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:02.974 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.314 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:08.315 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:08.315 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:08.315 Found net devices under 0000:86:00.0: cvl_0_0 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:08.315 Found net devices under 0000:86:00.1: cvl_0_1 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:08.315 13:59:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:08.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:08.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:18:08.315 00:18:08.315 --- 10.0.0.2 ping statistics --- 00:18:08.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.315 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:08.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:08.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:18:08.315 00:18:08.315 --- 10.0.0.1 ping statistics --- 00:18:08.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.315 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:08.315 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:08.316 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:08.316 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:08.316 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:08.316 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:08.316 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:08.316 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:08.316 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.316 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2982648 00:18:08.316 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2982648 00:18:08.316 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:08.316 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2982648 ']' 00:18:08.316 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.316 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:08.316 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.316 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:08.316 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.576 [2024-07-26 13:59:35.794778] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:18:08.576 [2024-07-26 13:59:35.794819] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.576 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.576 [2024-07-26 13:59:35.851689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.576 [2024-07-26 13:59:35.923579] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.576 [2024-07-26 13:59:35.923621] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.576 [2024-07-26 13:59:35.923627] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.576 [2024-07-26 13:59:35.923633] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.576 [2024-07-26 13:59:35.923638] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:08.576 [2024-07-26 13:59:35.923660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.515 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:09.515 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:09.515 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:09.515 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:09.515 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.515 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.515 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:09.515 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:09.515 true 00:18:09.515 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:09.515 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:09.775 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:09.775 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:09.775 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:09.775 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:09.775 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:10.034 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:10.034 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:10.034 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:18:10.294 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:10.294 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:10.294 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:10.294 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:10.294 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:10.294 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:10.554 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:10.554 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:10.554 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:10.814 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:10.814 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:10.814 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:10.814 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:10.814 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:11.074 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:11.074 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.tR7ZY0cqiT 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.gLlrxkTmor 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.tR7ZY0cqiT 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.gLlrxkTmor 00:18:11.334 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:11.594 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:11.855 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.tR7ZY0cqiT 00:18:11.855 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.tR7ZY0cqiT 00:18:11.855 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:11.855 [2024-07-26 13:59:39.204449] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.855 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:12.115 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:12.115 [2024-07-26 13:59:39.537299] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:12.115 [2024-07-26 13:59:39.537476] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:12.375 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:12.375 malloc0 00:18:12.375 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:12.634 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tR7ZY0cqiT 00:18:12.634 [2024-07-26 13:59:40.042965] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:12.634 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.tR7ZY0cqiT 00:18:12.895 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.888 Initializing NVMe Controllers 00:18:22.888 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:22.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:22.888 Initialization complete. Launching workers. 00:18:22.888 ======================================================== 00:18:22.888 Latency(us) 00:18:22.888 Device Information : IOPS MiB/s Average min max 00:18:22.888 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16376.30 63.97 3908.53 797.66 6787.31 00:18:22.888 ======================================================== 00:18:22.888 Total : 16376.30 63.97 3908.53 797.66 6787.31 00:18:22.888 00:18:22.888 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tR7ZY0cqiT 00:18:22.888 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:22.888 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:22.888 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:22.888 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tR7ZY0cqiT' 00:18:22.888 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:22.888 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2985075 00:18:22.888 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:22.888 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:22.888 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2985075 /var/tmp/bdevperf.sock 00:18:22.888 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2985075 ']' 00:18:22.888 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:22.888 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:22.888 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:22.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:22.888 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:22.888 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.888 [2024-07-26 13:59:50.217306] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:18:22.888 [2024-07-26 13:59:50.217354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2985075 ] 00:18:22.888 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.888 [2024-07-26 13:59:50.266679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.149 [2024-07-26 13:59:50.347612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.718 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:23.718 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:23.719 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tR7ZY0cqiT 00:18:23.979 [2024-07-26 13:59:51.181523] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:23.979 [2024-07-26 13:59:51.181593] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:23.979 TLSTESTn1 00:18:23.979 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:23.979 Running I/O for 10 seconds... 
00:18:36.194 00:18:36.195 Latency(us) 00:18:36.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.195 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:36.195 Verification LBA range: start 0x0 length 0x2000 00:18:36.195 TLSTESTn1 : 10.09 1072.11 4.19 0.00 0.00 118944.24 7180.47 171419.38 00:18:36.195 =================================================================================================================== 00:18:36.195 Total : 1072.11 4.19 0.00 0.00 118944.24 7180.47 171419.38 00:18:36.195 0 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2985075 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2985075 ']' 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2985075 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2985075 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2985075' 00:18:36.195 killing process with pid 2985075 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2985075 00:18:36.195 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.195 00:18:36.195 Latency(us) 00:18:36.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.195 =================================================================================================================== 00:18:36.195 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:36.195 [2024-07-26 14:00:01.569502] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2985075 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gLlrxkTmor 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gLlrxkTmor 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gLlrxkTmor 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.gLlrxkTmor' 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2987157 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2987157 /var/tmp/bdevperf.sock 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2987157 ']' 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:36.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:36.195 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.195 [2024-07-26 14:00:01.798659] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:18:36.195 [2024-07-26 14:00:01.798706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2987157 ] 00:18:36.195 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.195 [2024-07-26 14:00:01.847719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.195 [2024-07-26 14:00:01.926881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.195 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:36.195 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:36.195 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gLlrxkTmor 00:18:36.195 [2024-07-26 14:00:02.773631] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:36.195 [2024-07-26 14:00:02.773701] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:36.195 [2024-07-26 14:00:02.779300] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:36.195 [2024-07-26 14:00:02.780316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b5570 (107): Transport endpoint is not connected 00:18:36.195 [2024-07-26 14:00:02.781309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b5570 (9): Bad file descriptor 00:18:36.195 [2024-07-26 14:00:02.782310] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:36.195 [2024-07-26 14:00:02.782321] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:36.195 [2024-07-26 14:00:02.782331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:36.195 request: 00:18:36.195 { 00:18:36.195 "name": "TLSTEST", 00:18:36.195 "trtype": "tcp", 00:18:36.195 "traddr": "10.0.0.2", 00:18:36.195 "adrfam": "ipv4", 00:18:36.195 "trsvcid": "4420", 00:18:36.195 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.195 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:36.195 "prchk_reftag": false, 00:18:36.195 "prchk_guard": false, 00:18:36.195 "hdgst": false, 00:18:36.195 "ddgst": false, 00:18:36.195 "psk": "/tmp/tmp.gLlrxkTmor", 00:18:36.195 "method": "bdev_nvme_attach_controller", 00:18:36.195 "req_id": 1 00:18:36.195 } 00:18:36.195 Got JSON-RPC error response 00:18:36.195 response: 00:18:36.195 { 00:18:36.195 "code": -5, 00:18:36.195 "message": "Input/output error" 00:18:36.195 } 00:18:36.195 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2987157 00:18:36.195 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2987157 ']' 00:18:36.195 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2987157 00:18:36.195 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:36.195 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:36.195 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2987157 00:18:36.195 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:36.195 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:36.195 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2987157' 00:18:36.195 killing process with pid 2987157 00:18:36.195 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2987157 00:18:36.195 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.195 00:18:36.195 Latency(us) 00:18:36.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.195 =================================================================================================================== 00:18:36.195 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:36.195 [2024-07-26 14:00:02.844167] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:36.195 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2987157 00:18:36.195 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:36.195 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:36.195 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:36.195 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tR7ZY0cqiT 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tR7ZY0cqiT 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tR7ZY0cqiT 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tR7ZY0cqiT' 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2987415 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2987415 /var/tmp/bdevperf.sock 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2987415 ']' 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:36.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:36.196 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.196 [2024-07-26 14:00:03.064435] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:18:36.196 [2024-07-26 14:00:03.064482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2987415 ] 00:18:36.196 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.196 [2024-07-26 14:00:03.113414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.196 [2024-07-26 14:00:03.190951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.457 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:36.457 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:36.457 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.tR7ZY0cqiT 00:18:36.717 [2024-07-26 14:00:04.044511] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:36.717 [2024-07-26 14:00:04.044577] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:36.717 [2024-07-26 14:00:04.052435] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:36.717 [2024-07-26 14:00:04.052459] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:36.717 [2024-07-26 14:00:04.052483] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:36.717 [2024-07-26 14:00:04.053102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1029570 (107): Transport endpoint is not connected 00:18:36.717 [2024-07-26 14:00:04.054096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1029570 (9): Bad file descriptor 00:18:36.718 [2024-07-26 14:00:04.055097] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:36.718 [2024-07-26 14:00:04.055106] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:36.718 [2024-07-26 14:00:04.055114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:36.718 request: 00:18:36.718 { 00:18:36.718 "name": "TLSTEST", 00:18:36.718 "trtype": "tcp", 00:18:36.718 "traddr": "10.0.0.2", 00:18:36.718 "adrfam": "ipv4", 00:18:36.718 "trsvcid": "4420", 00:18:36.718 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.718 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:36.718 "prchk_reftag": false, 00:18:36.718 "prchk_guard": false, 00:18:36.718 "hdgst": false, 00:18:36.718 "ddgst": false, 00:18:36.718 "psk": "/tmp/tmp.tR7ZY0cqiT", 00:18:36.718 "method": "bdev_nvme_attach_controller", 00:18:36.718 "req_id": 1 00:18:36.718 } 00:18:36.718 Got JSON-RPC error response 00:18:36.718 response: 00:18:36.718 { 00:18:36.718 "code": -5, 00:18:36.718 "message": "Input/output error" 00:18:36.718 } 00:18:36.718 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2987415 00:18:36.718 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2987415 ']' 00:18:36.718 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2987415 00:18:36.718 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:36.718 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:36.718 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2987415 00:18:36.718 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:36.718 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:36.718 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2987415' 00:18:36.718 killing process with pid 2987415 00:18:36.718 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2987415 00:18:36.718 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.718 00:18:36.718 Latency(us) 00:18:36.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.718 =================================================================================================================== 00:18:36.718 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:36.718 [2024-07-26 14:00:04.128575] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:36.718 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2987415 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tR7ZY0cqiT 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tR7ZY0cqiT 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tR7ZY0cqiT 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tR7ZY0cqiT' 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2987582 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2987582 /var/tmp/bdevperf.sock 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2987582 ']' 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:36.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:36.978 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.978 [2024-07-26 14:00:04.352530] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:18:36.978 [2024-07-26 14:00:04.352579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2987582 ] 00:18:36.978 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.978 [2024-07-26 14:00:04.401521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.237 [2024-07-26 14:00:04.480515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.806 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:37.806 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:37.806 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tR7ZY0cqiT 00:18:38.109 [2024-07-26 14:00:05.313987] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.109 [2024-07-26 14:00:05.314061] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:38.109 [2024-07-26 14:00:05.325678] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:38.109 [2024-07-26 14:00:05.325700] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:38.109 [2024-07-26 14:00:05.325722] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:38.109 [2024-07-26 14:00:05.326513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121d570 (107): Transport endpoint is not connected 00:18:38.109 [2024-07-26 14:00:05.327507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121d570 (9): Bad file descriptor 00:18:38.109 [2024-07-26 14:00:05.328508] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:38.109 [2024-07-26 14:00:05.328518] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:38.109 [2024-07-26 14:00:05.328527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:38.109 request: 00:18:38.109 { 00:18:38.109 "name": "TLSTEST", 00:18:38.109 "trtype": "tcp", 00:18:38.110 "traddr": "10.0.0.2", 00:18:38.110 "adrfam": "ipv4", 00:18:38.110 "trsvcid": "4420", 00:18:38.110 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:38.110 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.110 "prchk_reftag": false, 00:18:38.110 "prchk_guard": false, 00:18:38.110 "hdgst": false, 00:18:38.110 "ddgst": false, 00:18:38.110 "psk": "/tmp/tmp.tR7ZY0cqiT", 00:18:38.110 "method": "bdev_nvme_attach_controller", 00:18:38.110 "req_id": 1 00:18:38.110 } 00:18:38.110 Got JSON-RPC error response 00:18:38.110 response: 00:18:38.110 { 00:18:38.110 "code": -5, 00:18:38.110 "message": "Input/output error" 00:18:38.110 } 00:18:38.110 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2987582 00:18:38.110 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2987582 ']' 00:18:38.110 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2987582 00:18:38.110 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:38.110 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:38.110 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2987582 00:18:38.110 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:38.110 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:38.110 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2987582' 00:18:38.110 killing process with pid 2987582 00:18:38.110 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2987582 00:18:38.110 Received shutdown signal, test time was about 10.000000 seconds 00:18:38.110 00:18:38.110 Latency(us) 00:18:38.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.110 =================================================================================================================== 00:18:38.110 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:38.110 [2024-07-26 14:00:05.403118] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:38.110 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2987582 00:18:38.387 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:38.387 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:38.387 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:38.387 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:38.387 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:38.387 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:38.387 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:38.387 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:38.387 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:38.387 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:38.387 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:38.387 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:38.388 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:38.388 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:38.388 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:38.388 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:38.388 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:38.388 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:38.388 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2987756 00:18:38.388 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:38.388 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:38.388 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2987756 /var/tmp/bdevperf.sock 00:18:38.388 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2987756 ']' 00:18:38.388 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:38.388 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:38.388 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:38.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:38.388 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:38.388 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.388 [2024-07-26 14:00:05.630812] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:18:38.388 [2024-07-26 14:00:05.630858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2987756 ] 00:18:38.388 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.388 [2024-07-26 14:00:05.684786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.388 [2024-07-26 14:00:05.760542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.327 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:39.327 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:39.327 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:39.327 [2024-07-26 14:00:06.598949] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:39.327 [2024-07-26 14:00:06.600935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8caf0 (9): Bad file descriptor 00:18:39.327 [2024-07-26 14:00:06.601933] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:39.327 [2024-07-26 14:00:06.601944] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:39.327 [2024-07-26 14:00:06.601952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:39.327 request: 00:18:39.327 { 00:18:39.327 "name": "TLSTEST", 00:18:39.327 "trtype": "tcp", 00:18:39.327 "traddr": "10.0.0.2", 00:18:39.327 "adrfam": "ipv4", 00:18:39.327 "trsvcid": "4420", 00:18:39.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:39.327 "prchk_reftag": false, 00:18:39.327 "prchk_guard": false, 00:18:39.327 "hdgst": false, 00:18:39.327 "ddgst": false, 00:18:39.327 "method": "bdev_nvme_attach_controller", 00:18:39.327 "req_id": 1 00:18:39.327 } 00:18:39.327 Got JSON-RPC error response 00:18:39.327 response: 00:18:39.327 { 00:18:39.327 "code": -5, 00:18:39.327 "message": "Input/output error" 00:18:39.327 } 00:18:39.327 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2987756 00:18:39.327 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2987756 ']' 00:18:39.327 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2987756 00:18:39.327 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:39.327 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:39.327 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2987756 00:18:39.327 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:39.327 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:39.328 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2987756' 00:18:39.328 killing process with pid 2987756 00:18:39.328 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2987756 00:18:39.328 Received shutdown signal, test time was about 10.000000 seconds 00:18:39.328 00:18:39.328 Latency(us) 00:18:39.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.328 =================================================================================================================== 00:18:39.328 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:39.328 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2987756 00:18:39.588 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:39.588 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:39.588 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:39.588 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:39.588 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:39.588 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 2982648 00:18:39.588 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2982648 ']' 00:18:39.588 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2982648 00:18:39.588 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:39.588 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:39.588 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2982648 00:18:39.588 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:39.588 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:39.588 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2982648' 00:18:39.588 killing process with pid 2982648 00:18:39.588 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2982648 00:18:39.588 [2024-07-26 14:00:06.881749] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:39.588 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2982648 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.Tcdtzul2Zs 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.Tcdtzul2Zs 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2988100 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2988100 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2988100 ']' 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.848 14:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:39.848 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.848 [2024-07-26 14:00:07.176607] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:18:39.848 [2024-07-26 14:00:07.176654] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.848 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.848 [2024-07-26 14:00:07.233479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.109 [2024-07-26 14:00:07.313007] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.109 [2024-07-26 14:00:07.313041] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.109 [2024-07-26 14:00:07.313055] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.109 [2024-07-26 14:00:07.313061] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.109 [2024-07-26 14:00:07.313066] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
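For reference, the NVMeTLSkey-1:02:... string that format_interchange_psk printed a few entries up can be rebuilt with a short shell/python sketch. This is not the harness code itself; it assumes the TP 8011 interchange layout (base64 of the configured PSK bytes followed by their CRC32, little-endian) and the 01=SHA-256 / 02=SHA-384 hash mapping, which is consistent with the prefix, key and digest values echoed in the log but not spelled out there:

    # Sketch only: reconstruct the interchange key printed by format_interchange_psk.
    key=00112233445566778899aabbccddeeff0011223344556677   # configured PSK string from the log
    hmac=2                                                  # 2 -> ":02:" (assumed SHA-384 mapping)
    python - "$key" "$hmac" <<'EOF'
import sys, base64, zlib
key = sys.argv[1].encode()                     # the log base64 starts with ASCII "001122...", so the literal string is encoded, not hex-decoded bytes
crc = zlib.crc32(key).to_bytes(4, "little")    # 4-byte CRC32 trailer; byte order is an assumption
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:" + base64.b64encode(key + crc).decode() + ":")
EOF

The output should match the key_long value captured above (NVMeTLSkey-1:02:MDAx...Jw==:); if the CRC byte order assumption is wrong, only the last few base64 characters would differ.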
00:18:40.109 [2024-07-26 14:00:07.313083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.680 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:40.680 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:40.680 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:40.680 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:40.680 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.680 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.680 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.Tcdtzul2Zs 00:18:40.680 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Tcdtzul2Zs 00:18:40.680 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:40.940 [2024-07-26 14:00:08.172538] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.940 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:40.940 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:41.200 [2024-07-26 14:00:08.513435] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:41.200 [2024-07-26 14:00:08.513613] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.200 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:41.459 malloc0 00:18:41.459 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:41.459 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Tcdtzul2Zs 00:18:41.717 [2024-07-26 14:00:09.026881] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:41.717 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Tcdtzul2Zs 00:18:41.717 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:41.718 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:41.718 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:41.718 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Tcdtzul2Zs' 00:18:41.718 14:00:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:41.718 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:41.718 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2988623 00:18:41.718 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:41.718 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2988623 /var/tmp/bdevperf.sock 00:18:41.718 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2988623 ']' 00:18:41.718 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:41.718 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:41.718 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:41.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:41.718 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:41.718 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.718 [2024-07-26 14:00:09.072144] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:18:41.718 [2024-07-26 14:00:09.072190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2988623 ] 00:18:41.718 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.718 [2024-07-26 14:00:09.120998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.976 [2024-07-26 14:00:09.195242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.976 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:41.976 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:41.976 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Tcdtzul2Zs 00:18:42.234 [2024-07-26 14:00:09.451865] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:42.234 [2024-07-26 14:00:09.451932] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:42.234 TLSTESTn1 00:18:42.234 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:42.234 Running I/O for 10 seconds... 
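Stripped of the xtrace prefixes, the bring-up that leads to the run above reduces to the rpc.py sequence below. Every call is taken from this log; only the long /var/jenkins/... prefix is shortened to the repository-relative scripts/rpc.py, so treat it as a readable sketch of what target/tls.sh drives rather than a copy of it:

    # Target side (default spdk.sock): TCP transport, subsystem, TLS listener (-k), malloc namespace, PSK for host1.
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Tcdtzul2Zs

    # Initiator side (bdevperf started with: build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10)
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.Tcdtzul2Zs
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The ten-second verify results for this attach follow immediately below.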
00:18:54.448 00:18:54.448 Latency(us) 00:18:54.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.448 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:54.448 Verification LBA range: start 0x0 length 0x2000 00:18:54.448 TLSTESTn1 : 10.09 1093.68 4.27 0.00 0.00 116642.37 6211.67 172331.19 00:18:54.448 =================================================================================================================== 00:18:54.448 Total : 1093.68 4.27 0.00 0.00 116642.37 6211.67 172331.19 00:18:54.448 0 00:18:54.448 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:54.448 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2988623 00:18:54.448 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2988623 ']' 00:18:54.448 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2988623 00:18:54.448 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:54.448 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:54.448 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2988623 00:18:54.448 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:54.448 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:54.448 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2988623' 00:18:54.448 killing process with pid 2988623 00:18:54.448 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2988623 00:18:54.448 Received shutdown signal, test time was about 10.000000 seconds 00:18:54.448 00:18:54.448 Latency(us) 00:18:54.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.448 =================================================================================================================== 00:18:54.448 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:54.448 [2024-07-26 14:00:19.847602] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:54.448 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2988623 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.Tcdtzul2Zs 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Tcdtzul2Zs 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Tcdtzul2Zs 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:54.448 
14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Tcdtzul2Zs 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Tcdtzul2Zs' 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2990628 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2990628 /var/tmp/bdevperf.sock 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2990628 ']' 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.448 [2024-07-26 14:00:20.086637] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:18:54.448 [2024-07-26 14:00:20.086692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2990628 ] 00:18:54.448 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.448 [2024-07-26 14:00:20.138071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.448 [2024-07-26 14:00:20.211401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:54.448 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Tcdtzul2Zs 00:18:54.448 [2024-07-26 14:00:21.054221] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.448 [2024-07-26 14:00:21.054268] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:54.448 [2024-07-26 14:00:21.054275] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.Tcdtzul2Zs 00:18:54.448 request: 00:18:54.448 { 00:18:54.448 "name": "TLSTEST", 00:18:54.448 "trtype": "tcp", 00:18:54.448 "traddr": "10.0.0.2", 00:18:54.448 "adrfam": "ipv4", 00:18:54.448 "trsvcid": "4420", 00:18:54.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:54.448 "prchk_reftag": false, 00:18:54.448 "prchk_guard": false, 00:18:54.448 "hdgst": false, 00:18:54.448 "ddgst": false, 00:18:54.448 "psk": "/tmp/tmp.Tcdtzul2Zs", 00:18:54.448 "method": "bdev_nvme_attach_controller", 00:18:54.448 "req_id": 1 00:18:54.448 } 00:18:54.449 Got JSON-RPC error response 00:18:54.449 response: 00:18:54.449 { 00:18:54.449 "code": -1, 00:18:54.449 "message": "Operation not permitted" 00:18:54.449 } 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2990628 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2990628 ']' 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2990628 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2990628 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2990628' 00:18:54.449 killing process with pid 2990628 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2990628 00:18:54.449 Received shutdown signal, test time was about 10.000000 seconds 00:18:54.449 
00:18:54.449 Latency(us) 00:18:54.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.449 =================================================================================================================== 00:18:54.449 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2990628 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 2988100 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2988100 ']' 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2988100 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2988100 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2988100' 00:18:54.449 killing process with pid 2988100 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2988100 00:18:54.449 [2024-07-26 14:00:21.327600] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2988100 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2990871 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2990871 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2990871 ']' 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.449 14:00:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:54.449 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.449 [2024-07-26 14:00:21.572981] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:18:54.449 [2024-07-26 14:00:21.573029] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.449 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.449 [2024-07-26 14:00:21.628512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.449 [2024-07-26 14:00:21.705314] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.449 [2024-07-26 14:00:21.705349] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:54.449 [2024-07-26 14:00:21.705356] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.449 [2024-07-26 14:00:21.705361] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:54.449 [2024-07-26 14:00:21.705367] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:54.449 [2024-07-26 14:00:21.705386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.020 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:55.020 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:55.020 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:55.020 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:55.020 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.020 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.020 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.Tcdtzul2Zs 00:18:55.020 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:55.020 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Tcdtzul2Zs 00:18:55.020 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:18:55.020 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:55.020 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:18:55.020 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:55.020 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.Tcdtzul2Zs 00:18:55.020 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Tcdtzul2Zs 00:18:55.020 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:55.280 [2024-07-26 14:00:22.571180] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.280 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:55.541 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:55.541 [2024-07-26 14:00:22.924085] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:55.541 [2024-07-26 14:00:22.924256] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.541 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:55.800 malloc0 00:18:55.800 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:56.059 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Tcdtzul2Zs 00:18:56.060 [2024-07-26 14:00:23.413425] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:56.060 [2024-07-26 14:00:23.413451] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:56.060 [2024-07-26 14:00:23.413473] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:56.060 request: 00:18:56.060 { 00:18:56.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.060 "host": "nqn.2016-06.io.spdk:host1", 00:18:56.060 "psk": "/tmp/tmp.Tcdtzul2Zs", 00:18:56.060 "method": "nvmf_subsystem_add_host", 00:18:56.060 "req_id": 1 00:18:56.060 } 00:18:56.060 Got JSON-RPC error response 00:18:56.060 response: 00:18:56.060 { 00:18:56.060 "code": -32603, 00:18:56.060 "message": "Internal error" 00:18:56.060 } 00:18:56.060 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:56.060 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:56.060 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:56.060 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:56.060 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 2990871 00:18:56.060 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2990871 ']' 00:18:56.060 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2990871 00:18:56.060 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:56.060 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:56.060 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2990871 00:18:56.060 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:56.060 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:56.060 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2990871' 00:18:56.060 killing process with pid 2990871 00:18:56.060 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2990871 00:18:56.060 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2990871 00:18:56.319 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.Tcdtzul2Zs 00:18:56.319 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:56.319 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:56.319 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:56.319 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.319 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2991145 00:18:56.319 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2991145 00:18:56.319 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x2 00:18:56.319 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2991145 ']' 00:18:56.319 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.319 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:56.319 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.319 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:56.319 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.319 [2024-07-26 14:00:23.722281] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:18:56.319 [2024-07-26 14:00:23.722329] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.319 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.579 [2024-07-26 14:00:23.779869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.580 [2024-07-26 14:00:23.858239] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.580 [2024-07-26 14:00:23.858278] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.580 [2024-07-26 14:00:23.858284] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.580 [2024-07-26 14:00:23.858290] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.580 [2024-07-26 14:00:23.858296] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
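Note on the two failures recorded above: both bdev_nvme_attach_controller (via bdevperf, "Incorrect permissions for PSK file" leading to "Operation not permitted") and nvmf_subsystem_add_host ("Could not retrieve PSK from file" leading to "Internal error") reject /tmp/tmp.Tcdtzul2Zs deliberately, because the key file is not yet restricted to its owner; the suite expects those errors and then tightens the mode at target/tls.sh@181 before retrying. A minimal sketch of that remediation, reusing the rpc.py path, NQNs and key file exactly as they appear in this trace:

    # Restrict the PSK file to its owner; looser modes trip the
    # "Incorrect permissions for PSK file" check seen above.
    chmod 0600 /tmp/tmp.Tcdtzul2Zs

    # Re-register the host against the TLS-enabled subsystem (same command
    # the suite issues at target/tls.sh@58).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Tcdtzul2Zs

With the mode corrected, the add_host call below succeeds and only logs the "PSK path" deprecation warning.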
00:18:56.580 [2024-07-26 14:00:23.858313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.149 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:57.149 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:57.149 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:57.149 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:57.149 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.149 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.149 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.Tcdtzul2Zs 00:18:57.149 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Tcdtzul2Zs 00:18:57.149 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:57.409 [2024-07-26 14:00:24.706183] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.409 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:57.669 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:57.669 [2024-07-26 14:00:25.043049] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:57.669 [2024-07-26 14:00:25.043219] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:57.669 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:57.929 malloc0 00:18:57.929 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:58.190 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Tcdtzul2Zs 00:18:58.190 [2024-07-26 14:00:25.576509] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:58.190 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2991617 00:18:58.190 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:58.190 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:58.190 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2991617 /var/tmp/bdevperf.sock 00:18:58.190 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- 
# '[' -z 2991617 ']' 00:18:58.190 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:58.190 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:58.190 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:58.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:58.190 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:58.190 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.449 [2024-07-26 14:00:25.638359] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:18:58.449 [2024-07-26 14:00:25.638404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2991617 ] 00:18:58.449 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.449 [2024-07-26 14:00:25.688182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.449 [2024-07-26 14:00:25.762035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.017 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:59.017 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:59.017 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Tcdtzul2Zs 00:18:59.276 [2024-07-26 14:00:26.592617] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.276 [2024-07-26 14:00:26.592688] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:59.276 TLSTESTn1 00:18:59.276 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:59.536 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:59.536 "subsystems": [ 00:18:59.536 { 00:18:59.536 "subsystem": "keyring", 00:18:59.536 "config": [] 00:18:59.536 }, 00:18:59.536 { 00:18:59.536 "subsystem": "iobuf", 00:18:59.536 "config": [ 00:18:59.536 { 00:18:59.536 "method": "iobuf_set_options", 00:18:59.536 "params": { 00:18:59.536 "small_pool_count": 8192, 00:18:59.536 "large_pool_count": 1024, 00:18:59.536 "small_bufsize": 8192, 00:18:59.536 "large_bufsize": 135168 00:18:59.536 } 00:18:59.536 } 00:18:59.536 ] 00:18:59.536 }, 00:18:59.536 { 00:18:59.536 "subsystem": "sock", 00:18:59.536 "config": [ 00:18:59.536 { 00:18:59.536 "method": "sock_set_default_impl", 00:18:59.536 "params": { 00:18:59.536 "impl_name": "posix" 00:18:59.536 } 00:18:59.536 }, 00:18:59.536 { 00:18:59.536 "method": "sock_impl_set_options", 00:18:59.536 "params": { 00:18:59.536 "impl_name": "ssl", 00:18:59.536 "recv_buf_size": 4096, 00:18:59.536 "send_buf_size": 4096, 
00:18:59.536 "enable_recv_pipe": true, 00:18:59.536 "enable_quickack": false, 00:18:59.536 "enable_placement_id": 0, 00:18:59.536 "enable_zerocopy_send_server": true, 00:18:59.536 "enable_zerocopy_send_client": false, 00:18:59.536 "zerocopy_threshold": 0, 00:18:59.536 "tls_version": 0, 00:18:59.536 "enable_ktls": false 00:18:59.536 } 00:18:59.536 }, 00:18:59.536 { 00:18:59.536 "method": "sock_impl_set_options", 00:18:59.536 "params": { 00:18:59.536 "impl_name": "posix", 00:18:59.536 "recv_buf_size": 2097152, 00:18:59.536 "send_buf_size": 2097152, 00:18:59.536 "enable_recv_pipe": true, 00:18:59.536 "enable_quickack": false, 00:18:59.536 "enable_placement_id": 0, 00:18:59.536 "enable_zerocopy_send_server": true, 00:18:59.536 "enable_zerocopy_send_client": false, 00:18:59.536 "zerocopy_threshold": 0, 00:18:59.536 "tls_version": 0, 00:18:59.536 "enable_ktls": false 00:18:59.536 } 00:18:59.536 } 00:18:59.536 ] 00:18:59.536 }, 00:18:59.536 { 00:18:59.536 "subsystem": "vmd", 00:18:59.536 "config": [] 00:18:59.536 }, 00:18:59.536 { 00:18:59.536 "subsystem": "accel", 00:18:59.536 "config": [ 00:18:59.536 { 00:18:59.536 "method": "accel_set_options", 00:18:59.536 "params": { 00:18:59.536 "small_cache_size": 128, 00:18:59.536 "large_cache_size": 16, 00:18:59.536 "task_count": 2048, 00:18:59.536 "sequence_count": 2048, 00:18:59.536 "buf_count": 2048 00:18:59.536 } 00:18:59.536 } 00:18:59.536 ] 00:18:59.536 }, 00:18:59.536 { 00:18:59.536 "subsystem": "bdev", 00:18:59.536 "config": [ 00:18:59.536 { 00:18:59.536 "method": "bdev_set_options", 00:18:59.536 "params": { 00:18:59.536 "bdev_io_pool_size": 65535, 00:18:59.536 "bdev_io_cache_size": 256, 00:18:59.536 "bdev_auto_examine": true, 00:18:59.536 "iobuf_small_cache_size": 128, 00:18:59.536 "iobuf_large_cache_size": 16 00:18:59.536 } 00:18:59.536 }, 00:18:59.536 { 00:18:59.536 "method": "bdev_raid_set_options", 00:18:59.536 "params": { 00:18:59.536 "process_window_size_kb": 1024, 00:18:59.536 "process_max_bandwidth_mb_sec": 0 00:18:59.536 } 00:18:59.536 }, 00:18:59.536 { 00:18:59.536 "method": "bdev_iscsi_set_options", 00:18:59.536 "params": { 00:18:59.536 "timeout_sec": 30 00:18:59.536 } 00:18:59.536 }, 00:18:59.536 { 00:18:59.536 "method": "bdev_nvme_set_options", 00:18:59.536 "params": { 00:18:59.536 "action_on_timeout": "none", 00:18:59.536 "timeout_us": 0, 00:18:59.536 "timeout_admin_us": 0, 00:18:59.536 "keep_alive_timeout_ms": 10000, 00:18:59.536 "arbitration_burst": 0, 00:18:59.536 "low_priority_weight": 0, 00:18:59.536 "medium_priority_weight": 0, 00:18:59.536 "high_priority_weight": 0, 00:18:59.536 "nvme_adminq_poll_period_us": 10000, 00:18:59.536 "nvme_ioq_poll_period_us": 0, 00:18:59.536 "io_queue_requests": 0, 00:18:59.536 "delay_cmd_submit": true, 00:18:59.536 "transport_retry_count": 4, 00:18:59.536 "bdev_retry_count": 3, 00:18:59.536 "transport_ack_timeout": 0, 00:18:59.536 "ctrlr_loss_timeout_sec": 0, 00:18:59.536 "reconnect_delay_sec": 0, 00:18:59.536 "fast_io_fail_timeout_sec": 0, 00:18:59.536 "disable_auto_failback": false, 00:18:59.536 "generate_uuids": false, 00:18:59.536 "transport_tos": 0, 00:18:59.536 "nvme_error_stat": false, 00:18:59.536 "rdma_srq_size": 0, 00:18:59.536 "io_path_stat": false, 00:18:59.536 "allow_accel_sequence": false, 00:18:59.536 "rdma_max_cq_size": 0, 00:18:59.536 "rdma_cm_event_timeout_ms": 0, 00:18:59.536 "dhchap_digests": [ 00:18:59.536 "sha256", 00:18:59.536 "sha384", 00:18:59.536 "sha512" 00:18:59.536 ], 00:18:59.536 "dhchap_dhgroups": [ 00:18:59.536 "null", 00:18:59.536 "ffdhe2048", 00:18:59.536 
"ffdhe3072", 00:18:59.536 "ffdhe4096", 00:18:59.536 "ffdhe6144", 00:18:59.536 "ffdhe8192" 00:18:59.536 ] 00:18:59.536 } 00:18:59.536 }, 00:18:59.536 { 00:18:59.536 "method": "bdev_nvme_set_hotplug", 00:18:59.537 "params": { 00:18:59.537 "period_us": 100000, 00:18:59.537 "enable": false 00:18:59.537 } 00:18:59.537 }, 00:18:59.537 { 00:18:59.537 "method": "bdev_malloc_create", 00:18:59.537 "params": { 00:18:59.537 "name": "malloc0", 00:18:59.537 "num_blocks": 8192, 00:18:59.537 "block_size": 4096, 00:18:59.537 "physical_block_size": 4096, 00:18:59.537 "uuid": "8f94ae91-44f4-4afc-8894-9cdd4eb85494", 00:18:59.537 "optimal_io_boundary": 0, 00:18:59.537 "md_size": 0, 00:18:59.537 "dif_type": 0, 00:18:59.537 "dif_is_head_of_md": false, 00:18:59.537 "dif_pi_format": 0 00:18:59.537 } 00:18:59.537 }, 00:18:59.537 { 00:18:59.537 "method": "bdev_wait_for_examine" 00:18:59.537 } 00:18:59.537 ] 00:18:59.537 }, 00:18:59.537 { 00:18:59.537 "subsystem": "nbd", 00:18:59.537 "config": [] 00:18:59.537 }, 00:18:59.537 { 00:18:59.537 "subsystem": "scheduler", 00:18:59.537 "config": [ 00:18:59.537 { 00:18:59.537 "method": "framework_set_scheduler", 00:18:59.537 "params": { 00:18:59.537 "name": "static" 00:18:59.537 } 00:18:59.537 } 00:18:59.537 ] 00:18:59.537 }, 00:18:59.537 { 00:18:59.537 "subsystem": "nvmf", 00:18:59.537 "config": [ 00:18:59.537 { 00:18:59.537 "method": "nvmf_set_config", 00:18:59.537 "params": { 00:18:59.537 "discovery_filter": "match_any", 00:18:59.537 "admin_cmd_passthru": { 00:18:59.537 "identify_ctrlr": false 00:18:59.537 } 00:18:59.537 } 00:18:59.537 }, 00:18:59.537 { 00:18:59.537 "method": "nvmf_set_max_subsystems", 00:18:59.537 "params": { 00:18:59.537 "max_subsystems": 1024 00:18:59.537 } 00:18:59.537 }, 00:18:59.537 { 00:18:59.537 "method": "nvmf_set_crdt", 00:18:59.537 "params": { 00:18:59.537 "crdt1": 0, 00:18:59.537 "crdt2": 0, 00:18:59.537 "crdt3": 0 00:18:59.537 } 00:18:59.537 }, 00:18:59.537 { 00:18:59.537 "method": "nvmf_create_transport", 00:18:59.537 "params": { 00:18:59.537 "trtype": "TCP", 00:18:59.537 "max_queue_depth": 128, 00:18:59.537 "max_io_qpairs_per_ctrlr": 127, 00:18:59.537 "in_capsule_data_size": 4096, 00:18:59.537 "max_io_size": 131072, 00:18:59.537 "io_unit_size": 131072, 00:18:59.537 "max_aq_depth": 128, 00:18:59.537 "num_shared_buffers": 511, 00:18:59.537 "buf_cache_size": 4294967295, 00:18:59.537 "dif_insert_or_strip": false, 00:18:59.537 "zcopy": false, 00:18:59.537 "c2h_success": false, 00:18:59.537 "sock_priority": 0, 00:18:59.537 "abort_timeout_sec": 1, 00:18:59.537 "ack_timeout": 0, 00:18:59.537 "data_wr_pool_size": 0 00:18:59.537 } 00:18:59.537 }, 00:18:59.537 { 00:18:59.537 "method": "nvmf_create_subsystem", 00:18:59.537 "params": { 00:18:59.537 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.537 "allow_any_host": false, 00:18:59.537 "serial_number": "SPDK00000000000001", 00:18:59.537 "model_number": "SPDK bdev Controller", 00:18:59.537 "max_namespaces": 10, 00:18:59.537 "min_cntlid": 1, 00:18:59.537 "max_cntlid": 65519, 00:18:59.537 "ana_reporting": false 00:18:59.537 } 00:18:59.537 }, 00:18:59.537 { 00:18:59.537 "method": "nvmf_subsystem_add_host", 00:18:59.537 "params": { 00:18:59.537 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.537 "host": "nqn.2016-06.io.spdk:host1", 00:18:59.537 "psk": "/tmp/tmp.Tcdtzul2Zs" 00:18:59.537 } 00:18:59.537 }, 00:18:59.537 { 00:18:59.537 "method": "nvmf_subsystem_add_ns", 00:18:59.537 "params": { 00:18:59.537 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.537 "namespace": { 00:18:59.537 "nsid": 1, 00:18:59.537 
"bdev_name": "malloc0", 00:18:59.537 "nguid": "8F94AE9144F44AFC88949CDD4EB85494", 00:18:59.537 "uuid": "8f94ae91-44f4-4afc-8894-9cdd4eb85494", 00:18:59.537 "no_auto_visible": false 00:18:59.537 } 00:18:59.537 } 00:18:59.537 }, 00:18:59.537 { 00:18:59.537 "method": "nvmf_subsystem_add_listener", 00:18:59.537 "params": { 00:18:59.537 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.537 "listen_address": { 00:18:59.537 "trtype": "TCP", 00:18:59.537 "adrfam": "IPv4", 00:18:59.537 "traddr": "10.0.0.2", 00:18:59.537 "trsvcid": "4420" 00:18:59.537 }, 00:18:59.537 "secure_channel": true 00:18:59.537 } 00:18:59.537 } 00:18:59.537 ] 00:18:59.537 } 00:18:59.537 ] 00:18:59.537 }' 00:18:59.537 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:59.797 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:59.797 "subsystems": [ 00:18:59.797 { 00:18:59.797 "subsystem": "keyring", 00:18:59.797 "config": [] 00:18:59.797 }, 00:18:59.797 { 00:18:59.797 "subsystem": "iobuf", 00:18:59.797 "config": [ 00:18:59.797 { 00:18:59.797 "method": "iobuf_set_options", 00:18:59.797 "params": { 00:18:59.797 "small_pool_count": 8192, 00:18:59.797 "large_pool_count": 1024, 00:18:59.797 "small_bufsize": 8192, 00:18:59.797 "large_bufsize": 135168 00:18:59.797 } 00:18:59.797 } 00:18:59.797 ] 00:18:59.797 }, 00:18:59.797 { 00:18:59.797 "subsystem": "sock", 00:18:59.797 "config": [ 00:18:59.797 { 00:18:59.797 "method": "sock_set_default_impl", 00:18:59.797 "params": { 00:18:59.797 "impl_name": "posix" 00:18:59.797 } 00:18:59.797 }, 00:18:59.797 { 00:18:59.797 "method": "sock_impl_set_options", 00:18:59.797 "params": { 00:18:59.797 "impl_name": "ssl", 00:18:59.797 "recv_buf_size": 4096, 00:18:59.797 "send_buf_size": 4096, 00:18:59.797 "enable_recv_pipe": true, 00:18:59.797 "enable_quickack": false, 00:18:59.797 "enable_placement_id": 0, 00:18:59.797 "enable_zerocopy_send_server": true, 00:18:59.797 "enable_zerocopy_send_client": false, 00:18:59.797 "zerocopy_threshold": 0, 00:18:59.797 "tls_version": 0, 00:18:59.797 "enable_ktls": false 00:18:59.797 } 00:18:59.797 }, 00:18:59.797 { 00:18:59.797 "method": "sock_impl_set_options", 00:18:59.797 "params": { 00:18:59.797 "impl_name": "posix", 00:18:59.797 "recv_buf_size": 2097152, 00:18:59.797 "send_buf_size": 2097152, 00:18:59.797 "enable_recv_pipe": true, 00:18:59.797 "enable_quickack": false, 00:18:59.797 "enable_placement_id": 0, 00:18:59.797 "enable_zerocopy_send_server": true, 00:18:59.797 "enable_zerocopy_send_client": false, 00:18:59.797 "zerocopy_threshold": 0, 00:18:59.797 "tls_version": 0, 00:18:59.797 "enable_ktls": false 00:18:59.797 } 00:18:59.797 } 00:18:59.797 ] 00:18:59.797 }, 00:18:59.797 { 00:18:59.797 "subsystem": "vmd", 00:18:59.797 "config": [] 00:18:59.797 }, 00:18:59.797 { 00:18:59.797 "subsystem": "accel", 00:18:59.797 "config": [ 00:18:59.797 { 00:18:59.797 "method": "accel_set_options", 00:18:59.797 "params": { 00:18:59.797 "small_cache_size": 128, 00:18:59.797 "large_cache_size": 16, 00:18:59.797 "task_count": 2048, 00:18:59.797 "sequence_count": 2048, 00:18:59.797 "buf_count": 2048 00:18:59.797 } 00:18:59.797 } 00:18:59.797 ] 00:18:59.797 }, 00:18:59.797 { 00:18:59.797 "subsystem": "bdev", 00:18:59.797 "config": [ 00:18:59.797 { 00:18:59.797 "method": "bdev_set_options", 00:18:59.797 "params": { 00:18:59.797 "bdev_io_pool_size": 65535, 00:18:59.797 "bdev_io_cache_size": 256, 00:18:59.797 
"bdev_auto_examine": true, 00:18:59.797 "iobuf_small_cache_size": 128, 00:18:59.797 "iobuf_large_cache_size": 16 00:18:59.797 } 00:18:59.797 }, 00:18:59.797 { 00:18:59.797 "method": "bdev_raid_set_options", 00:18:59.797 "params": { 00:18:59.797 "process_window_size_kb": 1024, 00:18:59.797 "process_max_bandwidth_mb_sec": 0 00:18:59.797 } 00:18:59.797 }, 00:18:59.797 { 00:18:59.797 "method": "bdev_iscsi_set_options", 00:18:59.797 "params": { 00:18:59.797 "timeout_sec": 30 00:18:59.797 } 00:18:59.797 }, 00:18:59.797 { 00:18:59.797 "method": "bdev_nvme_set_options", 00:18:59.797 "params": { 00:18:59.797 "action_on_timeout": "none", 00:18:59.797 "timeout_us": 0, 00:18:59.797 "timeout_admin_us": 0, 00:18:59.797 "keep_alive_timeout_ms": 10000, 00:18:59.797 "arbitration_burst": 0, 00:18:59.797 "low_priority_weight": 0, 00:18:59.797 "medium_priority_weight": 0, 00:18:59.797 "high_priority_weight": 0, 00:18:59.797 "nvme_adminq_poll_period_us": 10000, 00:18:59.797 "nvme_ioq_poll_period_us": 0, 00:18:59.797 "io_queue_requests": 512, 00:18:59.797 "delay_cmd_submit": true, 00:18:59.797 "transport_retry_count": 4, 00:18:59.797 "bdev_retry_count": 3, 00:18:59.797 "transport_ack_timeout": 0, 00:18:59.797 "ctrlr_loss_timeout_sec": 0, 00:18:59.797 "reconnect_delay_sec": 0, 00:18:59.797 "fast_io_fail_timeout_sec": 0, 00:18:59.797 "disable_auto_failback": false, 00:18:59.797 "generate_uuids": false, 00:18:59.797 "transport_tos": 0, 00:18:59.797 "nvme_error_stat": false, 00:18:59.797 "rdma_srq_size": 0, 00:18:59.797 "io_path_stat": false, 00:18:59.797 "allow_accel_sequence": false, 00:18:59.797 "rdma_max_cq_size": 0, 00:18:59.797 "rdma_cm_event_timeout_ms": 0, 00:18:59.797 "dhchap_digests": [ 00:18:59.797 "sha256", 00:18:59.797 "sha384", 00:18:59.797 "sha512" 00:18:59.797 ], 00:18:59.797 "dhchap_dhgroups": [ 00:18:59.797 "null", 00:18:59.797 "ffdhe2048", 00:18:59.797 "ffdhe3072", 00:18:59.797 "ffdhe4096", 00:18:59.797 "ffdhe6144", 00:18:59.797 "ffdhe8192" 00:18:59.797 ] 00:18:59.797 } 00:18:59.797 }, 00:18:59.797 { 00:18:59.797 "method": "bdev_nvme_attach_controller", 00:18:59.797 "params": { 00:18:59.797 "name": "TLSTEST", 00:18:59.797 "trtype": "TCP", 00:18:59.797 "adrfam": "IPv4", 00:18:59.797 "traddr": "10.0.0.2", 00:18:59.797 "trsvcid": "4420", 00:18:59.797 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.797 "prchk_reftag": false, 00:18:59.797 "prchk_guard": false, 00:18:59.797 "ctrlr_loss_timeout_sec": 0, 00:18:59.797 "reconnect_delay_sec": 0, 00:18:59.797 "fast_io_fail_timeout_sec": 0, 00:18:59.797 "psk": "/tmp/tmp.Tcdtzul2Zs", 00:18:59.797 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:59.797 "hdgst": false, 00:18:59.797 "ddgst": false 00:18:59.797 } 00:18:59.797 }, 00:18:59.797 { 00:18:59.797 "method": "bdev_nvme_set_hotplug", 00:18:59.797 "params": { 00:18:59.797 "period_us": 100000, 00:18:59.797 "enable": false 00:18:59.797 } 00:18:59.797 }, 00:18:59.797 { 00:18:59.797 "method": "bdev_wait_for_examine" 00:18:59.797 } 00:18:59.797 ] 00:18:59.797 }, 00:18:59.797 { 00:18:59.797 "subsystem": "nbd", 00:18:59.797 "config": [] 00:18:59.797 } 00:18:59.797 ] 00:18:59.797 }' 00:18:59.797 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 2991617 00:18:59.797 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2991617 ']' 00:18:59.797 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2991617 00:18:59.797 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 
00:18:59.797 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:59.797 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2991617 00:19:00.057 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:00.057 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:00.058 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2991617' 00:19:00.058 killing process with pid 2991617 00:19:00.058 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2991617 00:19:00.058 Received shutdown signal, test time was about 10.000000 seconds 00:19:00.058 00:19:00.058 Latency(us) 00:19:00.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.058 =================================================================================================================== 00:19:00.058 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:00.058 [2024-07-26 14:00:27.263305] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:00.058 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2991617 00:19:00.058 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 2991145 00:19:00.058 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2991145 ']' 00:19:00.058 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2991145 00:19:00.058 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:00.058 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:00.058 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2991145 00:19:00.058 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:00.058 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:00.058 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2991145' 00:19:00.058 killing process with pid 2991145 00:19:00.058 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2991145 00:19:00.058 [2024-07-26 14:00:27.492221] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:00.058 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2991145 00:19:00.318 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:00.318 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:00.318 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:00.318 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:00.318 "subsystems": [ 00:19:00.318 { 00:19:00.318 "subsystem": "keyring", 00:19:00.318 "config": [] 00:19:00.318 }, 00:19:00.318 { 00:19:00.318 
"subsystem": "iobuf", 00:19:00.318 "config": [ 00:19:00.318 { 00:19:00.318 "method": "iobuf_set_options", 00:19:00.318 "params": { 00:19:00.318 "small_pool_count": 8192, 00:19:00.318 "large_pool_count": 1024, 00:19:00.318 "small_bufsize": 8192, 00:19:00.318 "large_bufsize": 135168 00:19:00.318 } 00:19:00.318 } 00:19:00.318 ] 00:19:00.318 }, 00:19:00.318 { 00:19:00.318 "subsystem": "sock", 00:19:00.318 "config": [ 00:19:00.318 { 00:19:00.318 "method": "sock_set_default_impl", 00:19:00.318 "params": { 00:19:00.318 "impl_name": "posix" 00:19:00.318 } 00:19:00.318 }, 00:19:00.318 { 00:19:00.318 "method": "sock_impl_set_options", 00:19:00.318 "params": { 00:19:00.318 "impl_name": "ssl", 00:19:00.318 "recv_buf_size": 4096, 00:19:00.318 "send_buf_size": 4096, 00:19:00.318 "enable_recv_pipe": true, 00:19:00.318 "enable_quickack": false, 00:19:00.318 "enable_placement_id": 0, 00:19:00.318 "enable_zerocopy_send_server": true, 00:19:00.318 "enable_zerocopy_send_client": false, 00:19:00.318 "zerocopy_threshold": 0, 00:19:00.318 "tls_version": 0, 00:19:00.318 "enable_ktls": false 00:19:00.318 } 00:19:00.318 }, 00:19:00.318 { 00:19:00.318 "method": "sock_impl_set_options", 00:19:00.318 "params": { 00:19:00.318 "impl_name": "posix", 00:19:00.318 "recv_buf_size": 2097152, 00:19:00.318 "send_buf_size": 2097152, 00:19:00.318 "enable_recv_pipe": true, 00:19:00.318 "enable_quickack": false, 00:19:00.318 "enable_placement_id": 0, 00:19:00.318 "enable_zerocopy_send_server": true, 00:19:00.318 "enable_zerocopy_send_client": false, 00:19:00.318 "zerocopy_threshold": 0, 00:19:00.318 "tls_version": 0, 00:19:00.318 "enable_ktls": false 00:19:00.318 } 00:19:00.318 } 00:19:00.318 ] 00:19:00.318 }, 00:19:00.318 { 00:19:00.318 "subsystem": "vmd", 00:19:00.318 "config": [] 00:19:00.318 }, 00:19:00.318 { 00:19:00.318 "subsystem": "accel", 00:19:00.318 "config": [ 00:19:00.318 { 00:19:00.318 "method": "accel_set_options", 00:19:00.318 "params": { 00:19:00.318 "small_cache_size": 128, 00:19:00.318 "large_cache_size": 16, 00:19:00.318 "task_count": 2048, 00:19:00.318 "sequence_count": 2048, 00:19:00.318 "buf_count": 2048 00:19:00.318 } 00:19:00.318 } 00:19:00.318 ] 00:19:00.318 }, 00:19:00.318 { 00:19:00.318 "subsystem": "bdev", 00:19:00.318 "config": [ 00:19:00.318 { 00:19:00.318 "method": "bdev_set_options", 00:19:00.318 "params": { 00:19:00.318 "bdev_io_pool_size": 65535, 00:19:00.318 "bdev_io_cache_size": 256, 00:19:00.318 "bdev_auto_examine": true, 00:19:00.318 "iobuf_small_cache_size": 128, 00:19:00.318 "iobuf_large_cache_size": 16 00:19:00.318 } 00:19:00.318 }, 00:19:00.318 { 00:19:00.318 "method": "bdev_raid_set_options", 00:19:00.318 "params": { 00:19:00.318 "process_window_size_kb": 1024, 00:19:00.318 "process_max_bandwidth_mb_sec": 0 00:19:00.318 } 00:19:00.318 }, 00:19:00.318 { 00:19:00.318 "method": "bdev_iscsi_set_options", 00:19:00.318 "params": { 00:19:00.318 "timeout_sec": 30 00:19:00.318 } 00:19:00.318 }, 00:19:00.318 { 00:19:00.318 "method": "bdev_nvme_set_options", 00:19:00.318 "params": { 00:19:00.318 "action_on_timeout": "none", 00:19:00.318 "timeout_us": 0, 00:19:00.318 "timeout_admin_us": 0, 00:19:00.318 "keep_alive_timeout_ms": 10000, 00:19:00.318 "arbitration_burst": 0, 00:19:00.318 "low_priority_weight": 0, 00:19:00.318 "medium_priority_weight": 0, 00:19:00.318 "high_priority_weight": 0, 00:19:00.318 "nvme_adminq_poll_period_us": 10000, 00:19:00.318 "nvme_ioq_poll_period_us": 0, 00:19:00.318 "io_queue_requests": 0, 00:19:00.318 "delay_cmd_submit": true, 00:19:00.318 "transport_retry_count": 4, 
00:19:00.318 "bdev_retry_count": 3, 00:19:00.318 "transport_ack_timeout": 0, 00:19:00.318 "ctrlr_loss_timeout_sec": 0, 00:19:00.318 "reconnect_delay_sec": 0, 00:19:00.318 "fast_io_fail_timeout_sec": 0, 00:19:00.318 "disable_auto_failback": false, 00:19:00.318 "generate_uuids": false, 00:19:00.318 "transport_tos": 0, 00:19:00.318 "nvme_error_stat": false, 00:19:00.318 "rdma_srq_size": 0, 00:19:00.318 "io_path_stat": false, 00:19:00.318 "allow_accel_sequence": false, 00:19:00.319 "rdma_max_cq_size": 0, 00:19:00.319 "rdma_cm_event_timeout_ms": 0, 00:19:00.319 "dhchap_digests": [ 00:19:00.319 "sha256", 00:19:00.319 "sha384", 00:19:00.319 "sha512" 00:19:00.319 ], 00:19:00.319 "dhchap_dhgroups": [ 00:19:00.319 "null", 00:19:00.319 "ffdhe2048", 00:19:00.319 "ffdhe3072", 00:19:00.319 "ffdhe4096", 00:19:00.319 "ffdhe6144", 00:19:00.319 "ffdhe8192" 00:19:00.319 ] 00:19:00.319 } 00:19:00.319 }, 00:19:00.319 { 00:19:00.319 "method": "bdev_nvme_set_hotplug", 00:19:00.319 "params": { 00:19:00.319 "period_us": 100000, 00:19:00.319 "enable": false 00:19:00.319 } 00:19:00.319 }, 00:19:00.319 { 00:19:00.319 "method": "bdev_malloc_create", 00:19:00.319 "params": { 00:19:00.319 "name": "malloc0", 00:19:00.319 "num_blocks": 8192, 00:19:00.319 "block_size": 4096, 00:19:00.319 "physical_block_size": 4096, 00:19:00.319 "uuid": "8f94ae91-44f4-4afc-8894-9cdd4eb85494", 00:19:00.319 "optimal_io_boundary": 0, 00:19:00.319 "md_size": 0, 00:19:00.319 "dif_type": 0, 00:19:00.319 "dif_is_head_of_md": false, 00:19:00.319 "dif_pi_format": 0 00:19:00.319 } 00:19:00.319 }, 00:19:00.319 { 00:19:00.319 "method": "bdev_wait_for_examine" 00:19:00.319 } 00:19:00.319 ] 00:19:00.319 }, 00:19:00.319 { 00:19:00.319 "subsystem": "nbd", 00:19:00.319 "config": [] 00:19:00.319 }, 00:19:00.319 { 00:19:00.319 "subsystem": "scheduler", 00:19:00.319 "config": [ 00:19:00.319 { 00:19:00.319 "method": "framework_set_scheduler", 00:19:00.319 "params": { 00:19:00.319 "name": "static" 00:19:00.319 } 00:19:00.319 } 00:19:00.319 ] 00:19:00.319 }, 00:19:00.319 { 00:19:00.319 "subsystem": "nvmf", 00:19:00.319 "config": [ 00:19:00.319 { 00:19:00.319 "method": "nvmf_set_config", 00:19:00.319 "params": { 00:19:00.319 "discovery_filter": "match_any", 00:19:00.319 "admin_cmd_passthru": { 00:19:00.319 "identify_ctrlr": false 00:19:00.319 } 00:19:00.319 } 00:19:00.319 }, 00:19:00.319 { 00:19:00.319 "method": "nvmf_set_max_subsystems", 00:19:00.319 "params": { 00:19:00.319 "max_subsystems": 1024 00:19:00.319 } 00:19:00.319 }, 00:19:00.319 { 00:19:00.319 "method": "nvmf_set_crdt", 00:19:00.319 "params": { 00:19:00.319 "crdt1": 0, 00:19:00.319 "crdt2": 0, 00:19:00.319 "crdt3": 0 00:19:00.319 } 00:19:00.319 }, 00:19:00.319 { 00:19:00.319 "method": "nvmf_create_transport", 00:19:00.319 "params": { 00:19:00.319 "trtype": "TCP", 00:19:00.319 "max_queue_depth": 128, 00:19:00.319 "max_io_qpairs_per_ctrlr": 127, 00:19:00.319 "in_capsule_data_size": 4096, 00:19:00.319 "max_io_size": 131072, 00:19:00.319 "io_unit_size": 131072, 00:19:00.319 "max_aq_depth": 128, 00:19:00.319 "num_shared_buffers": 511, 00:19:00.319 "buf_cache_size": 4294967295, 00:19:00.319 "dif_insert_or_strip": false, 00:19:00.319 "zcopy": false, 00:19:00.319 "c2h_success": false, 00:19:00.319 "sock_priority": 0, 00:19:00.319 "abort_timeout_sec": 1, 00:19:00.319 "ack_timeout": 0, 00:19:00.319 "data_wr_pool_size": 0 00:19:00.319 } 00:19:00.319 }, 00:19:00.319 { 00:19:00.319 "method": "nvmf_create_subsystem", 00:19:00.319 "params": { 00:19:00.319 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.319 
"allow_any_host": false, 00:19:00.319 "serial_number": "SPDK00000000000001", 00:19:00.319 "model_number": "SPDK bdev Controller", 00:19:00.319 "max_namespaces": 10, 00:19:00.319 "min_cntlid": 1, 00:19:00.319 "max_cntlid": 65519, 00:19:00.319 "ana_reporting": false 00:19:00.319 } 00:19:00.319 }, 00:19:00.319 { 00:19:00.319 "method": "nvmf_subsystem_add_host", 00:19:00.319 "params": { 00:19:00.319 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.319 "host": "nqn.2016-06.io.spdk:host1", 00:19:00.319 "psk": "/tmp/tmp.Tcdtzul2Zs" 00:19:00.319 } 00:19:00.319 }, 00:19:00.319 { 00:19:00.319 "method": "nvmf_subsystem_add_ns", 00:19:00.319 "params": { 00:19:00.319 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.319 "namespace": { 00:19:00.319 "nsid": 1, 00:19:00.319 "bdev_name": "malloc0", 00:19:00.319 "nguid": "8F94AE9144F44AFC88949CDD4EB85494", 00:19:00.319 "uuid": "8f94ae91-44f4-4afc-8894-9cdd4eb85494", 00:19:00.319 "no_auto_visible": false 00:19:00.319 } 00:19:00.319 } 00:19:00.319 }, 00:19:00.319 { 00:19:00.319 "method": "nvmf_subsystem_add_listener", 00:19:00.319 "params": { 00:19:00.319 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.319 "listen_address": { 00:19:00.319 "trtype": "TCP", 00:19:00.319 "adrfam": "IPv4", 00:19:00.319 "traddr": "10.0.0.2", 00:19:00.319 "trsvcid": "4420" 00:19:00.319 }, 00:19:00.319 "secure_channel": true 00:19:00.319 } 00:19:00.319 } 00:19:00.319 ] 00:19:00.319 } 00:19:00.319 ] 00:19:00.319 }' 00:19:00.319 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.319 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2991874 00:19:00.319 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:00.319 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2991874 00:19:00.319 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2991874 ']' 00:19:00.319 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.319 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:00.319 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.319 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:00.319 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.319 [2024-07-26 14:00:27.743837] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:19:00.319 [2024-07-26 14:00:27.743883] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.581 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.581 [2024-07-26 14:00:27.799869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.581 [2024-07-26 14:00:27.878312] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:00.581 [2024-07-26 14:00:27.878348] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.581 [2024-07-26 14:00:27.878356] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.581 [2024-07-26 14:00:27.878362] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.581 [2024-07-26 14:00:27.878367] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:00.581 [2024-07-26 14:00:27.878413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.841 [2024-07-26 14:00:28.081771] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.841 [2024-07-26 14:00:28.115000] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:00.841 [2024-07-26 14:00:28.131031] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:00.841 [2024-07-26 14:00:28.131220] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.411 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:01.411 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:01.411 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:01.411 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:01.411 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.411 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.411 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2992119 00:19:01.411 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2992119 /var/tmp/bdevperf.sock 00:19:01.411 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2992119 ']' 00:19:01.411 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.411 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:01.411 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:01.411 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:01.411 "subsystems": [ 00:19:01.411 { 00:19:01.411 "subsystem": "keyring", 00:19:01.411 "config": [] 00:19:01.411 }, 00:19:01.411 { 00:19:01.411 "subsystem": "iobuf", 00:19:01.411 "config": [ 00:19:01.411 { 00:19:01.411 "method": "iobuf_set_options", 00:19:01.411 "params": { 00:19:01.411 "small_pool_count": 8192, 00:19:01.411 "large_pool_count": 1024, 00:19:01.411 "small_bufsize": 8192, 00:19:01.411 "large_bufsize": 135168 00:19:01.411 } 00:19:01.411 } 00:19:01.411 ] 00:19:01.411 }, 00:19:01.411 { 00:19:01.411 "subsystem": "sock", 00:19:01.411 "config": [ 00:19:01.411 { 00:19:01.411 "method": "sock_set_default_impl", 00:19:01.411 "params": { 00:19:01.411 "impl_name": "posix" 00:19:01.411 } 00:19:01.411 }, 
00:19:01.411 { 00:19:01.411 "method": "sock_impl_set_options", 00:19:01.411 "params": { 00:19:01.411 "impl_name": "ssl", 00:19:01.411 "recv_buf_size": 4096, 00:19:01.411 "send_buf_size": 4096, 00:19:01.411 "enable_recv_pipe": true, 00:19:01.411 "enable_quickack": false, 00:19:01.411 "enable_placement_id": 0, 00:19:01.411 "enable_zerocopy_send_server": true, 00:19:01.411 "enable_zerocopy_send_client": false, 00:19:01.411 "zerocopy_threshold": 0, 00:19:01.411 "tls_version": 0, 00:19:01.411 "enable_ktls": false 00:19:01.411 } 00:19:01.411 }, 00:19:01.411 { 00:19:01.411 "method": "sock_impl_set_options", 00:19:01.411 "params": { 00:19:01.411 "impl_name": "posix", 00:19:01.411 "recv_buf_size": 2097152, 00:19:01.411 "send_buf_size": 2097152, 00:19:01.411 "enable_recv_pipe": true, 00:19:01.411 "enable_quickack": false, 00:19:01.411 "enable_placement_id": 0, 00:19:01.411 "enable_zerocopy_send_server": true, 00:19:01.411 "enable_zerocopy_send_client": false, 00:19:01.411 "zerocopy_threshold": 0, 00:19:01.411 "tls_version": 0, 00:19:01.411 "enable_ktls": false 00:19:01.411 } 00:19:01.411 } 00:19:01.411 ] 00:19:01.411 }, 00:19:01.411 { 00:19:01.411 "subsystem": "vmd", 00:19:01.411 "config": [] 00:19:01.411 }, 00:19:01.411 { 00:19:01.411 "subsystem": "accel", 00:19:01.411 "config": [ 00:19:01.411 { 00:19:01.411 "method": "accel_set_options", 00:19:01.411 "params": { 00:19:01.411 "small_cache_size": 128, 00:19:01.411 "large_cache_size": 16, 00:19:01.411 "task_count": 2048, 00:19:01.411 "sequence_count": 2048, 00:19:01.411 "buf_count": 2048 00:19:01.411 } 00:19:01.411 } 00:19:01.411 ] 00:19:01.411 }, 00:19:01.411 { 00:19:01.411 "subsystem": "bdev", 00:19:01.411 "config": [ 00:19:01.411 { 00:19:01.411 "method": "bdev_set_options", 00:19:01.411 "params": { 00:19:01.411 "bdev_io_pool_size": 65535, 00:19:01.411 "bdev_io_cache_size": 256, 00:19:01.411 "bdev_auto_examine": true, 00:19:01.411 "iobuf_small_cache_size": 128, 00:19:01.411 "iobuf_large_cache_size": 16 00:19:01.411 } 00:19:01.411 }, 00:19:01.411 { 00:19:01.411 "method": "bdev_raid_set_options", 00:19:01.411 "params": { 00:19:01.411 "process_window_size_kb": 1024, 00:19:01.411 "process_max_bandwidth_mb_sec": 0 00:19:01.411 } 00:19:01.411 }, 00:19:01.411 { 00:19:01.412 "method": "bdev_iscsi_set_options", 00:19:01.412 "params": { 00:19:01.412 "timeout_sec": 30 00:19:01.412 } 00:19:01.412 }, 00:19:01.412 { 00:19:01.412 "method": "bdev_nvme_set_options", 00:19:01.412 "params": { 00:19:01.412 "action_on_timeout": "none", 00:19:01.412 "timeout_us": 0, 00:19:01.412 "timeout_admin_us": 0, 00:19:01.412 "keep_alive_timeout_ms": 10000, 00:19:01.412 "arbitration_burst": 0, 00:19:01.412 "low_priority_weight": 0, 00:19:01.412 "medium_priority_weight": 0, 00:19:01.412 "high_priority_weight": 0, 00:19:01.412 "nvme_adminq_poll_period_us": 10000, 00:19:01.412 "nvme_ioq_poll_period_us": 0, 00:19:01.412 "io_queue_requests": 512, 00:19:01.412 "delay_cmd_submit": true, 00:19:01.412 "transport_retry_count": 4, 00:19:01.412 "bdev_retry_count": 3, 00:19:01.412 "transport_ack_timeout": 0, 00:19:01.412 "ctrlr_loss_timeout_sec": 0, 00:19:01.412 "reconnect_delay_sec": 0, 00:19:01.412 "fast_io_fail_timeout_sec": 0, 00:19:01.412 "disable_auto_failback": false, 00:19:01.412 "generate_uuids": false, 00:19:01.412 "transport_tos": 0, 00:19:01.412 "nvme_error_stat": false, 00:19:01.412 "rdma_srq_size": 0, 00:19:01.412 "io_path_stat": false, 00:19:01.412 "allow_accel_sequence": false, 00:19:01.412 "rdma_max_cq_size": 0, 00:19:01.412 "rdma_cm_event_timeout_ms": 0, 00:19:01.412 
"dhchap_digests": [ 00:19:01.412 "sha256", 00:19:01.412 "sha384", 00:19:01.412 "sha512" 00:19:01.412 ], 00:19:01.412 "dhchap_dhgroups": [ 00:19:01.412 "null", 00:19:01.412 "ffdhe2048", 00:19:01.412 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.412 "ffdhe3072", 00:19:01.412 "ffdhe4096", 00:19:01.412 "ffdhe6144", 00:19:01.412 "ffdhe8192" 00:19:01.412 ] 00:19:01.412 } 00:19:01.412 }, 00:19:01.412 { 00:19:01.412 "method": "bdev_nvme_attach_controller", 00:19:01.412 "params": { 00:19:01.412 "name": "TLSTEST", 00:19:01.412 "trtype": "TCP", 00:19:01.412 "adrfam": "IPv4", 00:19:01.412 "traddr": "10.0.0.2", 00:19:01.412 "trsvcid": "4420", 00:19:01.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.412 "prchk_reftag": false, 00:19:01.412 "prchk_guard": false, 00:19:01.412 "ctrlr_loss_timeout_sec": 0, 00:19:01.412 "reconnect_delay_sec": 0, 00:19:01.412 "fast_io_fail_timeout_sec": 0, 00:19:01.412 "psk": "/tmp/tmp.Tcdtzul2Zs", 00:19:01.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:01.412 "hdgst": false, 00:19:01.412 "ddgst": false 00:19:01.412 } 00:19:01.412 }, 00:19:01.412 { 00:19:01.412 "method": "bdev_nvme_set_hotplug", 00:19:01.412 "params": { 00:19:01.412 "period_us": 100000, 00:19:01.412 "enable": false 00:19:01.412 } 00:19:01.412 }, 00:19:01.412 { 00:19:01.412 "method": "bdev_wait_for_examine" 00:19:01.412 } 00:19:01.412 ] 00:19:01.412 }, 00:19:01.412 { 00:19:01.412 "subsystem": "nbd", 00:19:01.412 "config": [] 00:19:01.412 } 00:19:01.412 ] 00:19:01.412 }' 00:19:01.412 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:01.412 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.412 [2024-07-26 14:00:28.620379] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:19:01.412 [2024-07-26 14:00:28.620426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2992119 ] 00:19:01.412 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.412 [2024-07-26 14:00:28.669434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.412 [2024-07-26 14:00:28.747747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.672 [2024-07-26 14:00:28.889281] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:01.672 [2024-07-26 14:00:28.889359] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:02.241 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:02.241 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:02.241 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:02.241 Running I/O for 10 seconds... 
00:19:12.310 00:19:12.310 Latency(us) 00:19:12.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.310 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:12.310 Verification LBA range: start 0x0 length 0x2000 00:19:12.310 TLSTESTn1 : 10.11 1100.78 4.30 0.00 0.00 115834.63 7208.96 197861.73 00:19:12.310 =================================================================================================================== 00:19:12.310 Total : 1100.78 4.30 0.00 0.00 115834.63 7208.96 197861.73 00:19:12.310 0 00:19:12.310 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:12.310 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 2992119 00:19:12.310 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2992119 ']' 00:19:12.310 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2992119 00:19:12.310 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:12.310 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:12.310 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2992119 00:19:12.310 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:12.310 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:12.310 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2992119' 00:19:12.310 killing process with pid 2992119 00:19:12.310 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2992119 00:19:12.310 Received shutdown signal, test time was about 10.000000 seconds 00:19:12.310 00:19:12.310 Latency(us) 00:19:12.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.310 =================================================================================================================== 00:19:12.310 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:12.310 [2024-07-26 14:00:39.700355] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:12.310 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2992119 00:19:12.570 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 2991874 00:19:12.570 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2991874 ']' 00:19:12.570 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2991874 00:19:12.570 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:12.570 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:12.570 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2991874 00:19:12.570 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:12.570 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:12.570 14:00:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2991874' 00:19:12.570 killing process with pid 2991874 00:19:12.570 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2991874 00:19:12.570 [2024-07-26 14:00:39.925551] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:12.570 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2991874 00:19:12.830 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:12.830 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:12.830 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:12.830 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.830 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2993966 00:19:12.830 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:12.830 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2993966 00:19:12.830 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2993966 ']' 00:19:12.830 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.830 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:12.830 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.830 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:12.830 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.830 [2024-07-26 14:00:40.173319] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:19:12.830 [2024-07-26 14:00:40.173371] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.830 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.830 [2024-07-26 14:00:40.229876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.091 [2024-07-26 14:00:40.309050] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.091 [2024-07-26 14:00:40.309084] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:13.091 [2024-07-26 14:00:40.309091] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:13.091 [2024-07-26 14:00:40.309098] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:13.091 [2024-07-26 14:00:40.309103] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
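The lines above tear down the previous bdevperf/target pair and bring up a fresh NVMe-oF target for the next stage of tls.sh. A condensed sketch of that bring-up under the same netns and flags seen in the trace (pid bookkeeping is simplified relative to the real nvmfappstart/waitforlisten helpers):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start nvmf_tgt inside the test's network namespace, instance id 0, all tracepoint groups enabled.
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# waitforlisten (autotest_common.sh) polls until the app answers RPCs on /var/tmp/spdk.sock.
waitforlisten "$nvmfpid"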
00:19:13.091 [2024-07-26 14:00:40.309124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.661 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:13.661 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:13.661 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:13.661 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:13.661 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.661 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.661 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.Tcdtzul2Zs 00:19:13.661 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Tcdtzul2Zs 00:19:13.661 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:13.921 [2024-07-26 14:00:41.164676] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.921 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:14.181 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:14.181 [2024-07-26 14:00:41.505554] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:14.181 [2024-07-26 14:00:41.505720] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.181 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:14.441 malloc0 00:19:14.441 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:14.441 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Tcdtzul2Zs 00:19:14.702 [2024-07-26 14:00:42.010861] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:14.702 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2994276 00:19:14.702 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:14.702 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.702 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2994276 /var/tmp/bdevperf.sock 00:19:14.702 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' 
-z 2994276 ']' 00:19:14.702 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.702 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:14.702 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.702 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:14.702 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.702 [2024-07-26 14:00:42.074252] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:19:14.702 [2024-07-26 14:00:42.074305] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2994276 ] 00:19:14.702 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.702 [2024-07-26 14:00:42.130186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.962 [2024-07-26 14:00:42.205427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.530 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:15.530 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:15.530 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Tcdtzul2Zs 00:19:15.789 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:15.789 [2024-07-26 14:00:43.204834] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.046 nvme0n1 00:19:16.046 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:16.046 Running I/O for 1 seconds... 
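Before the 1-second run whose results follow, the trace above configures both halves of the TLS path: the target gets a TLS-enabled listener and a PSK-restricted host, and the bdevperf side registers the same PSK file as a keyring key before attaching the controller. A condensed sketch of those RPCs exactly as traced (/tmp/tmp.Tcdtzul2Zs is the temporary PSK file created earlier in tls.sh):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py
KEY=/tmp/tmp.Tcdtzul2Zs

# Target side: TCP transport, subsystem, TLS listener (-k), malloc namespace, PSK-bound host.
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

# Initiator side, against bdevperf's RPC socket: register the key, then attach over TLS.
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY"
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1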
00:19:17.425 00:19:17.425 Latency(us) 00:19:17.425 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.425 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:17.425 Verification LBA range: start 0x0 length 0x2000 00:19:17.425 nvme0n1 : 1.13 899.29 3.51 0.00 0.00 136165.35 7208.96 171419.38 00:19:17.425 =================================================================================================================== 00:19:17.425 Total : 899.29 3.51 0.00 0.00 136165.35 7208.96 171419.38 00:19:17.425 0 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 2994276 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2994276 ']' 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2994276 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2994276 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2994276' 00:19:17.425 killing process with pid 2994276 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2994276 00:19:17.425 Received shutdown signal, test time was about 1.000000 seconds 00:19:17.425 00:19:17.425 Latency(us) 00:19:17.425 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.425 =================================================================================================================== 00:19:17.425 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2994276 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 2993966 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2993966 ']' 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2993966 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2993966 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2993966' 00:19:17.425 killing process with pid 2993966 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2993966 00:19:17.425 [2024-07-26 14:00:44.816382] app.c:1024:log_deprecation_hits: 
*WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:17.425 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2993966 00:19:17.684 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:19:17.684 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:17.685 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:17.685 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.685 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2994857 00:19:17.685 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2994857 00:19:17.685 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:17.685 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2994857 ']' 00:19:17.685 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.685 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:17.685 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.685 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:17.685 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.685 [2024-07-26 14:00:45.063051] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:19:17.685 [2024-07-26 14:00:45.063119] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.685 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.944 [2024-07-26 14:00:45.121153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.944 [2024-07-26 14:00:45.192016] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.944 [2024-07-26 14:00:45.192062] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.944 [2024-07-26 14:00:45.192069] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.944 [2024-07-26 14:00:45.192075] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.944 [2024-07-26 14:00:45.192080] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
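The notices above are printed by each nvmf_tgt start in this log and explain how to inspect the tracepoints enabled by -e 0xFFFF. A short sketch of the two options exactly as the app suggests them (the spdk_trace binary path is assumed to follow the usual build layout):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Attach to the live trace of app "nvmf", shared-memory instance id 0 (matches -i 0 above)...
$SPDK/build/bin/spdk_trace -s nvmf -i 0
# ...or copy the shared-memory trace file out for offline analysis, as the last notice suggests.
cp /dev/shm/nvmf_trace.0 /tmp/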
00:19:17.944 [2024-07-26 14:00:45.192098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.513 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:18.513 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:18.513 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:18.513 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:18.513 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.513 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.513 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:19:18.513 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.513 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.513 [2024-07-26 14:00:45.905716] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.513 malloc0 00:19:18.513 [2024-07-26 14:00:45.934065] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:18.513 [2024-07-26 14:00:45.944198] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.773 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.773 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2994951 00:19:18.773 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 2994951 /var/tmp/bdevperf.sock 00:19:18.773 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:18.773 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2994951 ']' 00:19:18.773 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.773 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:18.773 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:18.773 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:18.774 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.774 [2024-07-26 14:00:46.013300] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:19:18.774 [2024-07-26 14:00:46.013340] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2994951 ] 00:19:18.774 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.774 [2024-07-26 14:00:46.066246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.774 [2024-07-26 14:00:46.139252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.711 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:19.711 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:19.711 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Tcdtzul2Zs 00:19:19.711 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:19.971 [2024-07-26 14:00:47.167002] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:19.971 nvme0n1 00:19:19.971 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:19.971 Running I/O for 1 seconds... 00:19:21.351 00:19:21.351 Latency(us) 00:19:21.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.351 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:21.351 Verification LBA range: start 0x0 length 0x2000 00:19:21.351 nvme0n1 : 1.09 906.92 3.54 0.00 0.00 136920.39 7123.48 167772.16 00:19:21.351 =================================================================================================================== 00:19:21.351 Total : 906.92 3.54 0.00 0.00 136920.39 7123.48 167772.16 00:19:21.351 0 00:19:21.351 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:19:21.351 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.351 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.351 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.351 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:19:21.351 "subsystems": [ 00:19:21.351 { 00:19:21.351 "subsystem": "keyring", 00:19:21.351 "config": [ 00:19:21.351 { 00:19:21.351 "method": "keyring_file_add_key", 00:19:21.351 "params": { 00:19:21.351 "name": "key0", 00:19:21.351 "path": "/tmp/tmp.Tcdtzul2Zs" 00:19:21.351 } 00:19:21.351 } 00:19:21.351 ] 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "subsystem": "iobuf", 00:19:21.351 "config": [ 00:19:21.351 { 00:19:21.351 "method": "iobuf_set_options", 00:19:21.351 "params": { 00:19:21.351 "small_pool_count": 8192, 00:19:21.351 "large_pool_count": 1024, 00:19:21.351 "small_bufsize": 8192, 00:19:21.351 "large_bufsize": 135168 00:19:21.351 } 00:19:21.351 } 00:19:21.351 ] 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 
"subsystem": "sock", 00:19:21.351 "config": [ 00:19:21.351 { 00:19:21.351 "method": "sock_set_default_impl", 00:19:21.351 "params": { 00:19:21.351 "impl_name": "posix" 00:19:21.351 } 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "method": "sock_impl_set_options", 00:19:21.351 "params": { 00:19:21.351 "impl_name": "ssl", 00:19:21.351 "recv_buf_size": 4096, 00:19:21.351 "send_buf_size": 4096, 00:19:21.351 "enable_recv_pipe": true, 00:19:21.351 "enable_quickack": false, 00:19:21.351 "enable_placement_id": 0, 00:19:21.351 "enable_zerocopy_send_server": true, 00:19:21.351 "enable_zerocopy_send_client": false, 00:19:21.351 "zerocopy_threshold": 0, 00:19:21.351 "tls_version": 0, 00:19:21.351 "enable_ktls": false 00:19:21.351 } 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "method": "sock_impl_set_options", 00:19:21.351 "params": { 00:19:21.351 "impl_name": "posix", 00:19:21.351 "recv_buf_size": 2097152, 00:19:21.351 "send_buf_size": 2097152, 00:19:21.351 "enable_recv_pipe": true, 00:19:21.351 "enable_quickack": false, 00:19:21.351 "enable_placement_id": 0, 00:19:21.351 "enable_zerocopy_send_server": true, 00:19:21.351 "enable_zerocopy_send_client": false, 00:19:21.351 "zerocopy_threshold": 0, 00:19:21.351 "tls_version": 0, 00:19:21.351 "enable_ktls": false 00:19:21.351 } 00:19:21.351 } 00:19:21.351 ] 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "subsystem": "vmd", 00:19:21.351 "config": [] 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "subsystem": "accel", 00:19:21.351 "config": [ 00:19:21.351 { 00:19:21.351 "method": "accel_set_options", 00:19:21.351 "params": { 00:19:21.351 "small_cache_size": 128, 00:19:21.351 "large_cache_size": 16, 00:19:21.351 "task_count": 2048, 00:19:21.351 "sequence_count": 2048, 00:19:21.351 "buf_count": 2048 00:19:21.351 } 00:19:21.351 } 00:19:21.351 ] 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "subsystem": "bdev", 00:19:21.351 "config": [ 00:19:21.351 { 00:19:21.351 "method": "bdev_set_options", 00:19:21.351 "params": { 00:19:21.351 "bdev_io_pool_size": 65535, 00:19:21.351 "bdev_io_cache_size": 256, 00:19:21.351 "bdev_auto_examine": true, 00:19:21.351 "iobuf_small_cache_size": 128, 00:19:21.351 "iobuf_large_cache_size": 16 00:19:21.351 } 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "method": "bdev_raid_set_options", 00:19:21.351 "params": { 00:19:21.351 "process_window_size_kb": 1024, 00:19:21.351 "process_max_bandwidth_mb_sec": 0 00:19:21.351 } 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "method": "bdev_iscsi_set_options", 00:19:21.351 "params": { 00:19:21.351 "timeout_sec": 30 00:19:21.351 } 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "method": "bdev_nvme_set_options", 00:19:21.351 "params": { 00:19:21.351 "action_on_timeout": "none", 00:19:21.351 "timeout_us": 0, 00:19:21.351 "timeout_admin_us": 0, 00:19:21.351 "keep_alive_timeout_ms": 10000, 00:19:21.351 "arbitration_burst": 0, 00:19:21.351 "low_priority_weight": 0, 00:19:21.351 "medium_priority_weight": 0, 00:19:21.351 "high_priority_weight": 0, 00:19:21.351 "nvme_adminq_poll_period_us": 10000, 00:19:21.351 "nvme_ioq_poll_period_us": 0, 00:19:21.351 "io_queue_requests": 0, 00:19:21.351 "delay_cmd_submit": true, 00:19:21.351 "transport_retry_count": 4, 00:19:21.351 "bdev_retry_count": 3, 00:19:21.351 "transport_ack_timeout": 0, 00:19:21.351 "ctrlr_loss_timeout_sec": 0, 00:19:21.351 "reconnect_delay_sec": 0, 00:19:21.351 "fast_io_fail_timeout_sec": 0, 00:19:21.351 "disable_auto_failback": false, 00:19:21.351 "generate_uuids": false, 00:19:21.351 "transport_tos": 0, 00:19:21.351 "nvme_error_stat": false, 00:19:21.351 
"rdma_srq_size": 0, 00:19:21.351 "io_path_stat": false, 00:19:21.351 "allow_accel_sequence": false, 00:19:21.351 "rdma_max_cq_size": 0, 00:19:21.351 "rdma_cm_event_timeout_ms": 0, 00:19:21.351 "dhchap_digests": [ 00:19:21.351 "sha256", 00:19:21.351 "sha384", 00:19:21.351 "sha512" 00:19:21.351 ], 00:19:21.351 "dhchap_dhgroups": [ 00:19:21.351 "null", 00:19:21.351 "ffdhe2048", 00:19:21.351 "ffdhe3072", 00:19:21.351 "ffdhe4096", 00:19:21.351 "ffdhe6144", 00:19:21.351 "ffdhe8192" 00:19:21.351 ] 00:19:21.351 } 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "method": "bdev_nvme_set_hotplug", 00:19:21.351 "params": { 00:19:21.351 "period_us": 100000, 00:19:21.351 "enable": false 00:19:21.351 } 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "method": "bdev_malloc_create", 00:19:21.351 "params": { 00:19:21.351 "name": "malloc0", 00:19:21.351 "num_blocks": 8192, 00:19:21.351 "block_size": 4096, 00:19:21.351 "physical_block_size": 4096, 00:19:21.351 "uuid": "e40b657c-393e-4ca1-9f30-c65784aaafa6", 00:19:21.351 "optimal_io_boundary": 0, 00:19:21.351 "md_size": 0, 00:19:21.351 "dif_type": 0, 00:19:21.351 "dif_is_head_of_md": false, 00:19:21.351 "dif_pi_format": 0 00:19:21.351 } 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "method": "bdev_wait_for_examine" 00:19:21.351 } 00:19:21.351 ] 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "subsystem": "nbd", 00:19:21.351 "config": [] 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "subsystem": "scheduler", 00:19:21.351 "config": [ 00:19:21.351 { 00:19:21.351 "method": "framework_set_scheduler", 00:19:21.351 "params": { 00:19:21.351 "name": "static" 00:19:21.351 } 00:19:21.351 } 00:19:21.351 ] 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "subsystem": "nvmf", 00:19:21.351 "config": [ 00:19:21.351 { 00:19:21.351 "method": "nvmf_set_config", 00:19:21.351 "params": { 00:19:21.351 "discovery_filter": "match_any", 00:19:21.351 "admin_cmd_passthru": { 00:19:21.351 "identify_ctrlr": false 00:19:21.351 } 00:19:21.351 } 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "method": "nvmf_set_max_subsystems", 00:19:21.351 "params": { 00:19:21.351 "max_subsystems": 1024 00:19:21.351 } 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "method": "nvmf_set_crdt", 00:19:21.351 "params": { 00:19:21.351 "crdt1": 0, 00:19:21.351 "crdt2": 0, 00:19:21.351 "crdt3": 0 00:19:21.351 } 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "method": "nvmf_create_transport", 00:19:21.351 "params": { 00:19:21.351 "trtype": "TCP", 00:19:21.351 "max_queue_depth": 128, 00:19:21.351 "max_io_qpairs_per_ctrlr": 127, 00:19:21.351 "in_capsule_data_size": 4096, 00:19:21.351 "max_io_size": 131072, 00:19:21.351 "io_unit_size": 131072, 00:19:21.351 "max_aq_depth": 128, 00:19:21.351 "num_shared_buffers": 511, 00:19:21.351 "buf_cache_size": 4294967295, 00:19:21.351 "dif_insert_or_strip": false, 00:19:21.351 "zcopy": false, 00:19:21.351 "c2h_success": false, 00:19:21.351 "sock_priority": 0, 00:19:21.351 "abort_timeout_sec": 1, 00:19:21.351 "ack_timeout": 0, 00:19:21.351 "data_wr_pool_size": 0 00:19:21.351 } 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "method": "nvmf_create_subsystem", 00:19:21.351 "params": { 00:19:21.351 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.351 "allow_any_host": false, 00:19:21.351 "serial_number": "00000000000000000000", 00:19:21.351 "model_number": "SPDK bdev Controller", 00:19:21.351 "max_namespaces": 32, 00:19:21.351 "min_cntlid": 1, 00:19:21.351 "max_cntlid": 65519, 00:19:21.351 "ana_reporting": false 00:19:21.351 } 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "method": "nvmf_subsystem_add_host", 00:19:21.351 
"params": { 00:19:21.351 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.351 "host": "nqn.2016-06.io.spdk:host1", 00:19:21.351 "psk": "key0" 00:19:21.351 } 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "method": "nvmf_subsystem_add_ns", 00:19:21.351 "params": { 00:19:21.351 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.351 "namespace": { 00:19:21.351 "nsid": 1, 00:19:21.351 "bdev_name": "malloc0", 00:19:21.351 "nguid": "E40B657C393E4CA19F30C65784AAAFA6", 00:19:21.351 "uuid": "e40b657c-393e-4ca1-9f30-c65784aaafa6", 00:19:21.351 "no_auto_visible": false 00:19:21.351 } 00:19:21.351 } 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "method": "nvmf_subsystem_add_listener", 00:19:21.351 "params": { 00:19:21.351 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.351 "listen_address": { 00:19:21.351 "trtype": "TCP", 00:19:21.351 "adrfam": "IPv4", 00:19:21.351 "traddr": "10.0.0.2", 00:19:21.351 "trsvcid": "4420" 00:19:21.351 }, 00:19:21.351 "secure_channel": false, 00:19:21.351 "sock_impl": "ssl" 00:19:21.351 } 00:19:21.351 } 00:19:21.351 ] 00:19:21.351 } 00:19:21.351 ] 00:19:21.351 }' 00:19:21.351 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:21.621 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:19:21.621 "subsystems": [ 00:19:21.621 { 00:19:21.621 "subsystem": "keyring", 00:19:21.621 "config": [ 00:19:21.621 { 00:19:21.621 "method": "keyring_file_add_key", 00:19:21.621 "params": { 00:19:21.621 "name": "key0", 00:19:21.621 "path": "/tmp/tmp.Tcdtzul2Zs" 00:19:21.621 } 00:19:21.621 } 00:19:21.621 ] 00:19:21.621 }, 00:19:21.621 { 00:19:21.621 "subsystem": "iobuf", 00:19:21.621 "config": [ 00:19:21.621 { 00:19:21.621 "method": "iobuf_set_options", 00:19:21.621 "params": { 00:19:21.621 "small_pool_count": 8192, 00:19:21.621 "large_pool_count": 1024, 00:19:21.621 "small_bufsize": 8192, 00:19:21.621 "large_bufsize": 135168 00:19:21.621 } 00:19:21.621 } 00:19:21.621 ] 00:19:21.621 }, 00:19:21.621 { 00:19:21.621 "subsystem": "sock", 00:19:21.621 "config": [ 00:19:21.621 { 00:19:21.621 "method": "sock_set_default_impl", 00:19:21.621 "params": { 00:19:21.621 "impl_name": "posix" 00:19:21.621 } 00:19:21.621 }, 00:19:21.621 { 00:19:21.621 "method": "sock_impl_set_options", 00:19:21.621 "params": { 00:19:21.621 "impl_name": "ssl", 00:19:21.621 "recv_buf_size": 4096, 00:19:21.621 "send_buf_size": 4096, 00:19:21.621 "enable_recv_pipe": true, 00:19:21.621 "enable_quickack": false, 00:19:21.621 "enable_placement_id": 0, 00:19:21.621 "enable_zerocopy_send_server": true, 00:19:21.621 "enable_zerocopy_send_client": false, 00:19:21.621 "zerocopy_threshold": 0, 00:19:21.621 "tls_version": 0, 00:19:21.621 "enable_ktls": false 00:19:21.621 } 00:19:21.621 }, 00:19:21.621 { 00:19:21.621 "method": "sock_impl_set_options", 00:19:21.621 "params": { 00:19:21.621 "impl_name": "posix", 00:19:21.621 "recv_buf_size": 2097152, 00:19:21.621 "send_buf_size": 2097152, 00:19:21.621 "enable_recv_pipe": true, 00:19:21.621 "enable_quickack": false, 00:19:21.621 "enable_placement_id": 0, 00:19:21.621 "enable_zerocopy_send_server": true, 00:19:21.621 "enable_zerocopy_send_client": false, 00:19:21.621 "zerocopy_threshold": 0, 00:19:21.621 "tls_version": 0, 00:19:21.621 "enable_ktls": false 00:19:21.621 } 00:19:21.621 } 00:19:21.621 ] 00:19:21.621 }, 00:19:21.621 { 00:19:21.621 "subsystem": "vmd", 00:19:21.621 "config": [] 00:19:21.621 }, 00:19:21.621 { 00:19:21.621 "subsystem": 
"accel", 00:19:21.621 "config": [ 00:19:21.621 { 00:19:21.621 "method": "accel_set_options", 00:19:21.621 "params": { 00:19:21.621 "small_cache_size": 128, 00:19:21.621 "large_cache_size": 16, 00:19:21.621 "task_count": 2048, 00:19:21.621 "sequence_count": 2048, 00:19:21.621 "buf_count": 2048 00:19:21.621 } 00:19:21.621 } 00:19:21.621 ] 00:19:21.621 }, 00:19:21.621 { 00:19:21.621 "subsystem": "bdev", 00:19:21.621 "config": [ 00:19:21.621 { 00:19:21.621 "method": "bdev_set_options", 00:19:21.621 "params": { 00:19:21.621 "bdev_io_pool_size": 65535, 00:19:21.621 "bdev_io_cache_size": 256, 00:19:21.621 "bdev_auto_examine": true, 00:19:21.621 "iobuf_small_cache_size": 128, 00:19:21.621 "iobuf_large_cache_size": 16 00:19:21.621 } 00:19:21.621 }, 00:19:21.621 { 00:19:21.621 "method": "bdev_raid_set_options", 00:19:21.621 "params": { 00:19:21.621 "process_window_size_kb": 1024, 00:19:21.621 "process_max_bandwidth_mb_sec": 0 00:19:21.621 } 00:19:21.621 }, 00:19:21.621 { 00:19:21.621 "method": "bdev_iscsi_set_options", 00:19:21.621 "params": { 00:19:21.621 "timeout_sec": 30 00:19:21.621 } 00:19:21.621 }, 00:19:21.621 { 00:19:21.621 "method": "bdev_nvme_set_options", 00:19:21.621 "params": { 00:19:21.621 "action_on_timeout": "none", 00:19:21.621 "timeout_us": 0, 00:19:21.621 "timeout_admin_us": 0, 00:19:21.621 "keep_alive_timeout_ms": 10000, 00:19:21.621 "arbitration_burst": 0, 00:19:21.621 "low_priority_weight": 0, 00:19:21.621 "medium_priority_weight": 0, 00:19:21.621 "high_priority_weight": 0, 00:19:21.621 "nvme_adminq_poll_period_us": 10000, 00:19:21.621 "nvme_ioq_poll_period_us": 0, 00:19:21.621 "io_queue_requests": 512, 00:19:21.621 "delay_cmd_submit": true, 00:19:21.621 "transport_retry_count": 4, 00:19:21.621 "bdev_retry_count": 3, 00:19:21.621 "transport_ack_timeout": 0, 00:19:21.621 "ctrlr_loss_timeout_sec": 0, 00:19:21.621 "reconnect_delay_sec": 0, 00:19:21.621 "fast_io_fail_timeout_sec": 0, 00:19:21.621 "disable_auto_failback": false, 00:19:21.621 "generate_uuids": false, 00:19:21.621 "transport_tos": 0, 00:19:21.621 "nvme_error_stat": false, 00:19:21.621 "rdma_srq_size": 0, 00:19:21.621 "io_path_stat": false, 00:19:21.621 "allow_accel_sequence": false, 00:19:21.621 "rdma_max_cq_size": 0, 00:19:21.621 "rdma_cm_event_timeout_ms": 0, 00:19:21.621 "dhchap_digests": [ 00:19:21.621 "sha256", 00:19:21.621 "sha384", 00:19:21.621 "sha512" 00:19:21.621 ], 00:19:21.621 "dhchap_dhgroups": [ 00:19:21.621 "null", 00:19:21.621 "ffdhe2048", 00:19:21.621 "ffdhe3072", 00:19:21.621 "ffdhe4096", 00:19:21.621 "ffdhe6144", 00:19:21.621 "ffdhe8192" 00:19:21.621 ] 00:19:21.621 } 00:19:21.621 }, 00:19:21.621 { 00:19:21.621 "method": "bdev_nvme_attach_controller", 00:19:21.621 "params": { 00:19:21.621 "name": "nvme0", 00:19:21.621 "trtype": "TCP", 00:19:21.621 "adrfam": "IPv4", 00:19:21.621 "traddr": "10.0.0.2", 00:19:21.622 "trsvcid": "4420", 00:19:21.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.622 "prchk_reftag": false, 00:19:21.622 "prchk_guard": false, 00:19:21.622 "ctrlr_loss_timeout_sec": 0, 00:19:21.622 "reconnect_delay_sec": 0, 00:19:21.622 "fast_io_fail_timeout_sec": 0, 00:19:21.622 "psk": "key0", 00:19:21.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:21.622 "hdgst": false, 00:19:21.622 "ddgst": false 00:19:21.622 } 00:19:21.622 }, 00:19:21.622 { 00:19:21.622 "method": "bdev_nvme_set_hotplug", 00:19:21.622 "params": { 00:19:21.622 "period_us": 100000, 00:19:21.622 "enable": false 00:19:21.622 } 00:19:21.622 }, 00:19:21.622 { 00:19:21.622 "method": "bdev_enable_histogram", 00:19:21.622 
"params": { 00:19:21.622 "name": "nvme0n1", 00:19:21.622 "enable": true 00:19:21.622 } 00:19:21.622 }, 00:19:21.622 { 00:19:21.622 "method": "bdev_wait_for_examine" 00:19:21.622 } 00:19:21.622 ] 00:19:21.622 }, 00:19:21.622 { 00:19:21.622 "subsystem": "nbd", 00:19:21.622 "config": [] 00:19:21.622 } 00:19:21.622 ] 00:19:21.622 }' 00:19:21.622 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 2994951 00:19:21.622 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2994951 ']' 00:19:21.622 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2994951 00:19:21.622 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:21.622 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:21.622 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2994951 00:19:21.622 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:21.622 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:21.622 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2994951' 00:19:21.622 killing process with pid 2994951 00:19:21.622 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2994951 00:19:21.622 Received shutdown signal, test time was about 1.000000 seconds 00:19:21.622 00:19:21.622 Latency(us) 00:19:21.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.622 =================================================================================================================== 00:19:21.622 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:21.622 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2994951 00:19:21.622 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 2994857 00:19:21.622 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2994857 ']' 00:19:21.622 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2994857 00:19:21.622 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:21.880 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:21.880 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2994857 00:19:21.880 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:21.880 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:21.880 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2994857' 00:19:21.880 killing process with pid 2994857 00:19:21.881 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2994857 00:19:21.881 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2994857 00:19:21.881 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:19:21.881 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:19:21.881 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:21.881 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:19:21.881 "subsystems": [ 00:19:21.881 { 00:19:21.881 "subsystem": "keyring", 00:19:21.881 "config": [ 00:19:21.881 { 00:19:21.881 "method": "keyring_file_add_key", 00:19:21.881 "params": { 00:19:21.881 "name": "key0", 00:19:21.881 "path": "/tmp/tmp.Tcdtzul2Zs" 00:19:21.881 } 00:19:21.881 } 00:19:21.881 ] 00:19:21.881 }, 00:19:21.881 { 00:19:21.881 "subsystem": "iobuf", 00:19:21.881 "config": [ 00:19:21.881 { 00:19:21.881 "method": "iobuf_set_options", 00:19:21.881 "params": { 00:19:21.881 "small_pool_count": 8192, 00:19:21.881 "large_pool_count": 1024, 00:19:21.881 "small_bufsize": 8192, 00:19:21.881 "large_bufsize": 135168 00:19:21.881 } 00:19:21.881 } 00:19:21.881 ] 00:19:21.881 }, 00:19:21.881 { 00:19:21.881 "subsystem": "sock", 00:19:21.881 "config": [ 00:19:21.881 { 00:19:21.881 "method": "sock_set_default_impl", 00:19:21.881 "params": { 00:19:21.881 "impl_name": "posix" 00:19:21.881 } 00:19:21.881 }, 00:19:21.881 { 00:19:21.881 "method": "sock_impl_set_options", 00:19:21.881 "params": { 00:19:21.881 "impl_name": "ssl", 00:19:21.881 "recv_buf_size": 4096, 00:19:21.881 "send_buf_size": 4096, 00:19:21.881 "enable_recv_pipe": true, 00:19:21.881 "enable_quickack": false, 00:19:21.881 "enable_placement_id": 0, 00:19:21.881 "enable_zerocopy_send_server": true, 00:19:21.881 "enable_zerocopy_send_client": false, 00:19:21.881 "zerocopy_threshold": 0, 00:19:21.881 "tls_version": 0, 00:19:21.881 "enable_ktls": false 00:19:21.881 } 00:19:21.881 }, 00:19:21.881 { 00:19:21.881 "method": "sock_impl_set_options", 00:19:21.881 "params": { 00:19:21.881 "impl_name": "posix", 00:19:21.881 "recv_buf_size": 2097152, 00:19:21.881 "send_buf_size": 2097152, 00:19:21.881 "enable_recv_pipe": true, 00:19:21.881 "enable_quickack": false, 00:19:21.881 "enable_placement_id": 0, 00:19:21.881 "enable_zerocopy_send_server": true, 00:19:21.881 "enable_zerocopy_send_client": false, 00:19:21.881 "zerocopy_threshold": 0, 00:19:21.881 "tls_version": 0, 00:19:21.881 "enable_ktls": false 00:19:21.881 } 00:19:21.881 } 00:19:21.881 ] 00:19:21.881 }, 00:19:21.881 { 00:19:21.881 "subsystem": "vmd", 00:19:21.881 "config": [] 00:19:21.881 }, 00:19:21.881 { 00:19:21.881 "subsystem": "accel", 00:19:21.881 "config": [ 00:19:21.881 { 00:19:21.881 "method": "accel_set_options", 00:19:21.881 "params": { 00:19:21.881 "small_cache_size": 128, 00:19:21.881 "large_cache_size": 16, 00:19:21.881 "task_count": 2048, 00:19:21.881 "sequence_count": 2048, 00:19:21.881 "buf_count": 2048 00:19:21.881 } 00:19:21.881 } 00:19:21.881 ] 00:19:21.881 }, 00:19:21.881 { 00:19:21.881 "subsystem": "bdev", 00:19:21.881 "config": [ 00:19:21.881 { 00:19:21.881 "method": "bdev_set_options", 00:19:21.881 "params": { 00:19:21.881 "bdev_io_pool_size": 65535, 00:19:21.881 "bdev_io_cache_size": 256, 00:19:21.881 "bdev_auto_examine": true, 00:19:21.881 "iobuf_small_cache_size": 128, 00:19:21.881 "iobuf_large_cache_size": 16 00:19:21.881 } 00:19:21.881 }, 00:19:21.881 { 00:19:21.881 "method": "bdev_raid_set_options", 00:19:21.881 "params": { 00:19:21.881 "process_window_size_kb": 1024, 00:19:21.881 "process_max_bandwidth_mb_sec": 0 00:19:21.881 } 00:19:21.881 }, 00:19:21.881 { 00:19:21.881 "method": "bdev_iscsi_set_options", 00:19:21.881 "params": { 00:19:21.881 "timeout_sec": 30 00:19:21.881 } 00:19:21.881 }, 00:19:21.881 { 00:19:21.881 
"method": "bdev_nvme_set_options", 00:19:21.881 "params": { 00:19:21.881 "action_on_timeout": "none", 00:19:21.881 "timeout_us": 0, 00:19:21.881 "timeout_admin_us": 0, 00:19:21.881 "keep_alive_timeout_ms": 10000, 00:19:21.881 "arbitration_burst": 0, 00:19:21.881 "low_priority_weight": 0, 00:19:21.881 "medium_priority_weight": 0, 00:19:21.881 "high_priority_weight": 0, 00:19:21.881 "nvme_adminq_poll_period_us": 10000, 00:19:21.881 "nvme_ioq_poll_period_us": 0, 00:19:21.881 "io_queue_requests": 0, 00:19:21.881 "delay_cmd_submit": true, 00:19:21.881 "transport_retry_count": 4, 00:19:21.881 "bdev_retry_count": 3, 00:19:21.881 "transport_ack_timeout": 0, 00:19:21.881 "ctrlr_loss_timeout_sec": 0, 00:19:21.881 "reconnect_delay_sec": 0, 00:19:21.881 "fast_io_fail_timeout_sec": 0, 00:19:21.881 "disable_auto_failback": false, 00:19:21.881 "generate_uuids": false, 00:19:21.881 "transport_tos": 0, 00:19:21.881 "nvme_error_stat": false, 00:19:21.881 "rdma_srq_size": 0, 00:19:21.881 "io_path_stat": false, 00:19:21.881 "allow_accel_sequence": false, 00:19:21.881 "rdma_max_cq_size": 0, 00:19:21.881 "rdma_cm_event_timeout_ms": 0, 00:19:21.881 "dhchap_digests": [ 00:19:21.881 "sha256", 00:19:21.881 "sha384", 00:19:21.881 "sha512" 00:19:21.881 ], 00:19:21.881 "dhchap_dhgroups": [ 00:19:21.881 "null", 00:19:21.881 "ffdhe2048", 00:19:21.881 "ffdhe3072", 00:19:21.881 "ffdhe4096", 00:19:21.881 "ffdhe6144", 00:19:21.881 "ffdhe8192" 00:19:21.881 ] 00:19:21.881 } 00:19:21.881 }, 00:19:21.881 { 00:19:21.881 "method": "bdev_nvme_set_hotplug", 00:19:21.881 "params": { 00:19:21.881 "period_us": 100000, 00:19:21.881 "enable": false 00:19:21.881 } 00:19:21.881 }, 00:19:21.881 { 00:19:21.881 "method": "bdev_malloc_create", 00:19:21.881 "params": { 00:19:21.881 "name": "malloc0", 00:19:21.881 "num_blocks": 8192, 00:19:21.881 "block_size": 4096, 00:19:21.881 "physical_block_size": 4096, 00:19:21.881 "uuid": "e40b657c-393e-4ca1-9f30-c65784aaafa6", 00:19:21.881 "optimal_io_boundary": 0, 00:19:21.881 "md_size": 0, 00:19:21.881 "dif_type": 0, 00:19:21.881 "dif_is_head_of_md": false, 00:19:21.881 "dif_pi_format": 0 00:19:21.881 } 00:19:21.881 }, 00:19:21.881 { 00:19:21.881 "method": "bdev_wait_for_examine" 00:19:21.881 } 00:19:21.881 ] 00:19:21.881 }, 00:19:21.881 { 00:19:21.881 "subsystem": "nbd", 00:19:21.881 "config": [] 00:19:21.881 }, 00:19:21.881 { 00:19:21.881 "subsystem": "scheduler", 00:19:21.881 "config": [ 00:19:21.881 { 00:19:21.881 "method": "framework_set_scheduler", 00:19:21.881 "params": { 00:19:21.881 "name": "static" 00:19:21.881 } 00:19:21.881 } 00:19:21.881 ] 00:19:21.881 }, 00:19:21.881 { 00:19:21.881 "subsystem": "nvmf", 00:19:21.881 "config": [ 00:19:21.881 { 00:19:21.881 "method": "nvmf_set_config", 00:19:21.881 "params": { 00:19:21.881 "discovery_filter": "match_any", 00:19:21.881 "admin_cmd_passthru": { 00:19:21.881 "identify_ctrlr": false 00:19:21.881 } 00:19:21.881 } 00:19:21.881 }, 00:19:21.881 { 00:19:21.881 "method": "nvmf_set_max_subsystems", 00:19:21.881 "params": { 00:19:21.881 "max_subsystems": 1024 00:19:21.881 } 00:19:21.881 }, 00:19:21.881 { 00:19:21.881 "method": "nvmf_set_crdt", 00:19:21.881 "params": { 00:19:21.881 "crdt1": 0, 00:19:21.881 "crdt2": 0, 00:19:21.881 "crdt3": 0 00:19:21.881 } 00:19:21.881 }, 00:19:21.881 { 00:19:21.881 "method": "nvmf_create_transport", 00:19:21.881 "params": { 00:19:21.881 "trtype": "TCP", 00:19:21.881 "max_queue_depth": 128, 00:19:21.881 "max_io_qpairs_per_ctrlr": 127, 00:19:21.881 "in_capsule_data_size": 4096, 00:19:21.881 "max_io_size": 131072, 
00:19:21.881 "io_unit_size": 131072, 00:19:21.881 "max_aq_depth": 128, 00:19:21.881 "num_shared_buffers": 511, 00:19:21.881 "buf_cache_size": 4294967295, 00:19:21.881 "dif_insert_or_strip": false, 00:19:21.881 "zcopy": false, 00:19:21.881 "c2h_success": false, 00:19:21.881 "sock_priority": 0, 00:19:21.881 "abort_timeout_sec": 1, 00:19:21.881 "ack_timeout": 0, 00:19:21.881 "data_wr_pool_size": 0 00:19:21.881 } 00:19:21.881 }, 00:19:21.881 { 00:19:21.881 "method": "nvmf_create_subsystem", 00:19:21.881 "params": { 00:19:21.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.881 "allow_any_host": false, 00:19:21.881 "serial_number": "00000000000000000000", 00:19:21.881 "model_number": "SPDK bdev Controller", 00:19:21.881 "max_namespaces": 32, 00:19:21.881 "min_cntlid": 1, 00:19:21.881 "max_cntlid": 65519, 00:19:21.882 "ana_reporting": false 00:19:21.882 } 00:19:21.882 }, 00:19:21.882 { 00:19:21.882 "method": "nvmf_subsystem_add_host", 00:19:21.882 "params": { 00:19:21.882 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.882 "host": "nqn.2016-06.io.spdk:host1", 00:19:21.882 "psk": "key0" 00:19:21.882 } 00:19:21.882 }, 00:19:21.882 { 00:19:21.882 "method": "nvmf_subsystem_add_ns", 00:19:21.882 "params": { 00:19:21.882 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.882 "namespace": { 00:19:21.882 "nsid": 1, 00:19:21.882 "bdev_name": "malloc0", 00:19:21.882 "nguid": "E40B657C393E4CA19F30C65784AAAFA6", 00:19:21.882 "uuid": "e40b657c-393e-4ca1-9f30-c65784aaafa6", 00:19:21.882 "no_auto_visible": false 00:19:21.882 } 00:19:21.882 } 00:19:21.882 }, 00:19:21.882 { 00:19:21.882 "method": "nvmf_subsystem_add_listener", 00:19:21.882 "params": { 00:19:21.882 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.882 "listen_address": { 00:19:21.882 "trtype": "TCP", 00:19:21.882 "adrfam": "IPv4", 00:19:21.882 "traddr": "10.0.0.2", 00:19:21.882 "trsvcid": "4420" 00:19:21.882 }, 00:19:21.882 "secure_channel": false, 00:19:21.882 "sock_impl": "ssl" 00:19:21.882 } 00:19:21.882 } 00:19:21.882 ] 00:19:21.882 } 00:19:21.882 ] 00:19:21.882 }' 00:19:21.882 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.882 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2995632 00:19:21.882 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:21.882 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2995632 00:19:21.882 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2995632 ']' 00:19:21.882 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.882 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:21.882 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.882 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:21.882 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.141 [2024-07-26 14:00:49.343162] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:19:22.141 [2024-07-26 14:00:49.343213] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.141 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.141 [2024-07-26 14:00:49.401218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.141 [2024-07-26 14:00:49.479638] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.141 [2024-07-26 14:00:49.479673] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.141 [2024-07-26 14:00:49.479681] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.141 [2024-07-26 14:00:49.479687] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.141 [2024-07-26 14:00:49.479692] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:22.141 [2024-07-26 14:00:49.479737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.400 [2024-07-26 14:00:49.690639] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.400 [2024-07-26 14:00:49.727688] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:22.400 [2024-07-26 14:00:49.727859] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.970 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:22.970 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:22.970 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:22.970 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:22.970 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.970 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.970 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2995673 00:19:22.970 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2995673 /var/tmp/bdevperf.sock 00:19:22.970 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2995673 ']' 00:19:22.970 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.970 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:22.970 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:22.970 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:22.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
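For this final stage both applications are restarted from the configs captured earlier rather than rebuilt RPC by RPC: the target was launched with -c /dev/fd/62 fed from $tgtcfg, and the bdevperf instance waited on above takes $bperfcfg on /dev/fd/63 (echoed next in the trace). A sketch of that replay, assuming the /dev/fd arguments come from bash process substitution:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Replay the saved target config into a fresh nvmf_tgt inside the test netns...
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
nvmfpid=$!
waitforlisten "$nvmfpid"
# ...and the saved initiator config into a fresh bdevperf, then wait on its RPC socket.
$SPDK/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
bdevperf_pid=$!
waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock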
00:19:22.970 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:19:22.970 "subsystems": [ 00:19:22.970 { 00:19:22.970 "subsystem": "keyring", 00:19:22.970 "config": [ 00:19:22.970 { 00:19:22.970 "method": "keyring_file_add_key", 00:19:22.970 "params": { 00:19:22.970 "name": "key0", 00:19:22.970 "path": "/tmp/tmp.Tcdtzul2Zs" 00:19:22.970 } 00:19:22.970 } 00:19:22.970 ] 00:19:22.970 }, 00:19:22.970 { 00:19:22.970 "subsystem": "iobuf", 00:19:22.970 "config": [ 00:19:22.970 { 00:19:22.970 "method": "iobuf_set_options", 00:19:22.970 "params": { 00:19:22.970 "small_pool_count": 8192, 00:19:22.970 "large_pool_count": 1024, 00:19:22.970 "small_bufsize": 8192, 00:19:22.970 "large_bufsize": 135168 00:19:22.970 } 00:19:22.970 } 00:19:22.970 ] 00:19:22.970 }, 00:19:22.970 { 00:19:22.970 "subsystem": "sock", 00:19:22.970 "config": [ 00:19:22.970 { 00:19:22.970 "method": "sock_set_default_impl", 00:19:22.970 "params": { 00:19:22.970 "impl_name": "posix" 00:19:22.970 } 00:19:22.970 }, 00:19:22.971 { 00:19:22.971 "method": "sock_impl_set_options", 00:19:22.971 "params": { 00:19:22.971 "impl_name": "ssl", 00:19:22.971 "recv_buf_size": 4096, 00:19:22.971 "send_buf_size": 4096, 00:19:22.971 "enable_recv_pipe": true, 00:19:22.971 "enable_quickack": false, 00:19:22.971 "enable_placement_id": 0, 00:19:22.971 "enable_zerocopy_send_server": true, 00:19:22.971 "enable_zerocopy_send_client": false, 00:19:22.971 "zerocopy_threshold": 0, 00:19:22.971 "tls_version": 0, 00:19:22.971 "enable_ktls": false 00:19:22.971 } 00:19:22.971 }, 00:19:22.971 { 00:19:22.971 "method": "sock_impl_set_options", 00:19:22.971 "params": { 00:19:22.971 "impl_name": "posix", 00:19:22.971 "recv_buf_size": 2097152, 00:19:22.971 "send_buf_size": 2097152, 00:19:22.971 "enable_recv_pipe": true, 00:19:22.971 "enable_quickack": false, 00:19:22.971 "enable_placement_id": 0, 00:19:22.971 "enable_zerocopy_send_server": true, 00:19:22.971 "enable_zerocopy_send_client": false, 00:19:22.971 "zerocopy_threshold": 0, 00:19:22.971 "tls_version": 0, 00:19:22.971 "enable_ktls": false 00:19:22.971 } 00:19:22.971 } 00:19:22.971 ] 00:19:22.971 }, 00:19:22.971 { 00:19:22.971 "subsystem": "vmd", 00:19:22.971 "config": [] 00:19:22.971 }, 00:19:22.971 { 00:19:22.971 "subsystem": "accel", 00:19:22.971 "config": [ 00:19:22.971 { 00:19:22.971 "method": "accel_set_options", 00:19:22.971 "params": { 00:19:22.971 "small_cache_size": 128, 00:19:22.971 "large_cache_size": 16, 00:19:22.971 "task_count": 2048, 00:19:22.971 "sequence_count": 2048, 00:19:22.971 "buf_count": 2048 00:19:22.971 } 00:19:22.971 } 00:19:22.971 ] 00:19:22.971 }, 00:19:22.971 { 00:19:22.971 "subsystem": "bdev", 00:19:22.971 "config": [ 00:19:22.971 { 00:19:22.971 "method": "bdev_set_options", 00:19:22.971 "params": { 00:19:22.971 "bdev_io_pool_size": 65535, 00:19:22.971 "bdev_io_cache_size": 256, 00:19:22.971 "bdev_auto_examine": true, 00:19:22.971 "iobuf_small_cache_size": 128, 00:19:22.971 "iobuf_large_cache_size": 16 00:19:22.971 } 00:19:22.971 }, 00:19:22.971 { 00:19:22.971 "method": "bdev_raid_set_options", 00:19:22.971 "params": { 00:19:22.971 "process_window_size_kb": 1024, 00:19:22.971 "process_max_bandwidth_mb_sec": 0 00:19:22.971 } 00:19:22.971 }, 00:19:22.971 { 00:19:22.971 "method": "bdev_iscsi_set_options", 00:19:22.971 "params": { 00:19:22.971 "timeout_sec": 30 00:19:22.971 } 00:19:22.971 }, 00:19:22.971 { 00:19:22.971 "method": "bdev_nvme_set_options", 00:19:22.971 "params": { 00:19:22.971 "action_on_timeout": "none", 00:19:22.971 "timeout_us": 0, 
00:19:22.971 "timeout_admin_us": 0, 00:19:22.971 "keep_alive_timeout_ms": 10000, 00:19:22.971 "arbitration_burst": 0, 00:19:22.971 "low_priority_weight": 0, 00:19:22.971 "medium_priority_weight": 0, 00:19:22.971 "high_priority_weight": 0, 00:19:22.971 "nvme_adminq_poll_period_us": 10000, 00:19:22.971 "nvme_ioq_poll_period_us": 0, 00:19:22.971 "io_queue_requests": 512, 00:19:22.971 "delay_cmd_submit": true, 00:19:22.971 "transport_retry_count": 4, 00:19:22.971 "bdev_retry_count": 3, 00:19:22.971 "transport_ack_timeout": 0, 00:19:22.971 "ctrlr_loss_timeout_sec": 0, 00:19:22.971 "reconnect_delay_sec": 0, 00:19:22.971 "fast_io_fail_timeout_sec": 0, 00:19:22.971 "disable_auto_failback": false, 00:19:22.971 "generate_uuids": false, 00:19:22.971 "transport_tos": 0, 00:19:22.971 "nvme_error_stat": false, 00:19:22.971 "rdma_srq_size": 0, 00:19:22.971 "io_path_stat": false, 00:19:22.971 "allow_accel_sequence": false, 00:19:22.971 "rdma_max_cq_size": 0, 00:19:22.971 "rdma_cm_event_timeout_ms": 0, 00:19:22.971 "dhchap_digests": [ 00:19:22.971 "sha256", 00:19:22.971 "sha384", 00:19:22.971 "sha512" 00:19:22.971 ], 00:19:22.971 "dhchap_dhgroups": [ 00:19:22.971 "null", 00:19:22.971 "ffdhe2048", 00:19:22.971 "ffdhe3072", 00:19:22.971 "ffdhe4096", 00:19:22.971 "ffdhe6144", 00:19:22.971 "ffdhe8192" 00:19:22.971 ] 00:19:22.971 } 00:19:22.971 }, 00:19:22.971 { 00:19:22.971 "method": "bdev_nvme_attach_controller", 00:19:22.971 "params": { 00:19:22.971 "name": "nvme0", 00:19:22.971 "trtype": "TCP", 00:19:22.971 "adrfam": "IPv4", 00:19:22.971 "traddr": "10.0.0.2", 00:19:22.971 "trsvcid": "4420", 00:19:22.971 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.971 "prchk_reftag": false, 00:19:22.971 "prchk_guard": false, 00:19:22.971 "ctrlr_loss_timeout_sec": 0, 00:19:22.971 "reconnect_delay_sec": 0, 00:19:22.971 "fast_io_fail_timeout_sec": 0, 00:19:22.971 "psk": "key0", 00:19:22.971 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:22.971 "hdgst": false, 00:19:22.971 "ddgst": false 00:19:22.971 } 00:19:22.971 }, 00:19:22.971 { 00:19:22.971 "method": "bdev_nvme_set_hotplug", 00:19:22.971 "params": { 00:19:22.971 "period_us": 100000, 00:19:22.971 "enable": false 00:19:22.971 } 00:19:22.971 }, 00:19:22.971 { 00:19:22.971 "method": "bdev_enable_histogram", 00:19:22.971 "params": { 00:19:22.971 "name": "nvme0n1", 00:19:22.971 "enable": true 00:19:22.971 } 00:19:22.971 }, 00:19:22.971 { 00:19:22.971 "method": "bdev_wait_for_examine" 00:19:22.971 } 00:19:22.971 ] 00:19:22.971 }, 00:19:22.971 { 00:19:22.971 "subsystem": "nbd", 00:19:22.971 "config": [] 00:19:22.971 } 00:19:22.971 ] 00:19:22.971 }' 00:19:22.971 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:22.971 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.971 [2024-07-26 14:00:50.232977] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:19:22.971 [2024-07-26 14:00:50.233020] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2995673 ] 00:19:22.971 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.971 [2024-07-26 14:00:50.287458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.971 [2024-07-26 14:00:50.364091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.231 [2024-07-26 14:00:50.515596] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:23.800 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:23.800 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:23.800 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:23.800 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:19:23.800 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.800 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:24.059 Running I/O for 1 seconds... 00:19:24.997 00:19:24.997 Latency(us) 00:19:24.997 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.997 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:24.997 Verification LBA range: start 0x0 length 0x2000 00:19:24.997 nvme0n1 : 1.08 802.13 3.13 0.00 0.00 155757.42 7208.96 190567.29 00:19:24.997 =================================================================================================================== 00:19:24.997 Total : 802.13 3.13 0.00 0.00 155757.42 7208.96 190567.29 00:19:24.997 0 00:19:24.998 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:19:24.998 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:19:24.998 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:24.998 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:19:24.998 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:19:24.998 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:19:24.998 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:24.998 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:19:24.998 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:19:24.998 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:19:24.998 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:24.998 nvmf_trace.0 00:19:25.258 14:00:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:19:25.258 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2995673 00:19:25.258 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2995673 ']' 00:19:25.258 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2995673 00:19:25.258 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:25.258 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:25.258 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2995673 00:19:25.258 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:25.258 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:25.258 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2995673' 00:19:25.258 killing process with pid 2995673 00:19:25.258 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2995673 00:19:25.258 Received shutdown signal, test time was about 1.000000 seconds 00:19:25.258 00:19:25.258 Latency(us) 00:19:25.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.258 =================================================================================================================== 00:19:25.258 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:25.258 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2995673 00:19:25.517 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:25.517 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:25.517 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:19:25.517 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:25.517 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:25.517 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:25.517 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:25.517 rmmod nvme_tcp 00:19:25.517 rmmod nvme_fabrics 00:19:25.517 rmmod nvme_keyring 00:19:25.517 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:25.517 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:25.517 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:25.517 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2995632 ']' 00:19:25.517 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2995632 00:19:25.517 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2995632 ']' 00:19:25.517 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2995632 00:19:25.517 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:25.517 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:25.517 14:00:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2995632 00:19:25.517 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:25.517 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:25.517 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2995632' 00:19:25.517 killing process with pid 2995632 00:19:25.517 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2995632 00:19:25.517 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2995632 00:19:25.776 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:25.776 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:25.776 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:25.776 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:25.776 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:25.776 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.776 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:25.776 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.688 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:27.688 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.tR7ZY0cqiT /tmp/tmp.gLlrxkTmor /tmp/tmp.Tcdtzul2Zs 00:19:27.688 00:19:27.688 real 1m24.984s 00:19:27.688 user 2m13.668s 00:19:27.688 sys 0m26.433s 00:19:27.688 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:27.688 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.688 ************************************ 00:19:27.688 END TEST nvmf_tls 00:19:27.688 ************************************ 00:19:27.688 14:00:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:27.688 14:00:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:27.688 14:00:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:27.688 14:00:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:27.948 ************************************ 00:19:27.948 START TEST nvmf_fips 00:19:27.948 ************************************ 00:19:27.948 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:27.948 * Looking for test storage... 
00:19:27.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:27.948 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:27.948 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:27.948 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:27.949 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:19:27.950 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:19:28.210 Error setting digest 00:19:28.210 00E29FA7537F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:28.210 00E29FA7537F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:28.210 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:19:28.210 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:28.210 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:28.210 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:28.210 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:28.210 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:28.210 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.210 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:28.210 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:28.210 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:28.210 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.210 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:28.210 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.210 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:28.210 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:28.210 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:19:28.210 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:33.520 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 
00:19:33.520 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:33.520 Found net devices under 0000:86:00.0: cvl_0_0 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:33.520 Found net devices under 0000:86:00.1: cvl_0_1 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:33.520 
14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:33.520 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:33.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:33.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:19:33.521 00:19:33.521 --- 10.0.0.2 ping statistics --- 00:19:33.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.521 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:33.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:33.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:19:33.521 00:19:33.521 --- 10.0.0.1 ping statistics --- 00:19:33.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.521 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2999587 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2999587 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2999587 ']' 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:33.521 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:33.521 [2024-07-26 14:01:00.504232] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
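The nvmftestinit/nvmf_tcp_init sequence above splits the two ports of the E810 NIC between a private network namespace for the target (cvl_0_0 inside cvl_0_0_ns_spdk, 10.0.0.2) and the host namespace for the initiator (cvl_0_1, 10.0.0.1); that is why every nvmf_tgt invocation in this log is wrapped in "ip netns exec cvl_0_0_ns_spdk", and the two pings confirm connectivity in both directions before any NVMe traffic is attempted. A condensed sketch of that wiring, lifted from the commands traced above:

# Target interface lives in its own netns; initiator interface stays in the default namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP (port 4420) in
ping -c 1 10.0.0.2                                                   # host -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespaced target -> host
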
00:19:33.521 [2024-07-26 14:01:00.504281] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.521 EAL: No free 2048 kB hugepages reported on node 1 00:19:33.521 [2024-07-26 14:01:00.561465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.521 [2024-07-26 14:01:00.639172] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.521 [2024-07-26 14:01:00.639205] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.521 [2024-07-26 14:01:00.639212] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.521 [2024-07-26 14:01:00.639217] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.521 [2024-07-26 14:01:00.639222] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:33.521 [2024-07-26 14:01:00.639255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.092 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:34.092 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:19:34.092 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:34.092 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:34.092 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:34.092 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.092 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:34.092 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:34.092 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:34.092 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:34.092 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:34.092 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:34.092 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:34.092 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:34.092 [2024-07-26 14:01:01.482611] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:34.092 [2024-07-26 14:01:01.498624] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:34.092 [2024-07-26 14:01:01.498787] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:34.092 
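The FIPS variant prepares the target in fips.sh rather than tls.sh: the PSK is written out in the NVMe TLS interchange format (the NVMeTLSkey-1:01:...: string above), restricted to mode 0600, and then handed to the target configuration as a file path, which is the older interface responsible for the "PSK path" deprecation notice that follows. A minimal sketch of that key handling; the key string and path are the ones used above, while the rpc.py option spelling is an assumption.

# Write the PSK in NVMe TLS interchange format and lock down its permissions.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"
# Register the allowed host with the PSK given as a path (deprecated in favour of keyring key names).
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"
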
[2024-07-26 14:01:01.526874] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:34.353 malloc0 00:19:34.353 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:34.353 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2999726 00:19:34.353 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2999726 /var/tmp/bdevperf.sock 00:19:34.353 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:34.353 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2999726 ']' 00:19:34.353 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:34.353 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.353 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:34.353 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.353 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:34.353 [2024-07-26 14:01:01.607280] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:19:34.353 [2024-07-26 14:01:01.607330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2999726 ] 00:19:34.353 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.353 [2024-07-26 14:01:01.658572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.353 [2024-07-26 14:01:01.732680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.293 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:35.293 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:19:35.293 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:35.293 [2024-07-26 14:01:02.555847] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:35.293 [2024-07-26 14:01:02.555936] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:35.293 TLSTESTn1 00:19:35.293 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:35.552 Running I/O for 10 seconds... 
00:19:45.540 00:19:45.540 Latency(us) 00:19:45.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.540 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:45.540 Verification LBA range: start 0x0 length 0x2000 00:19:45.540 TLSTESTn1 : 10.09 1067.11 4.17 0.00 0.00 119526.50 7180.47 176890.21 00:19:45.540 =================================================================================================================== 00:19:45.540 Total : 1067.11 4.17 0.00 0.00 119526.50 7180.47 176890.21 00:19:45.540 0 00:19:45.540 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:45.540 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:45.540 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:19:45.540 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:19:45.540 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:19:45.540 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:45.540 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:19:45.540 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:19:45.540 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:19:45.540 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:45.540 nvmf_trace.0 00:19:45.540 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:19:45.540 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2999726 00:19:45.540 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2999726 ']' 00:19:45.540 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2999726 00:19:45.540 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:19:45.540 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:45.540 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2999726 00:19:45.800 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:45.800 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:45.800 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2999726' 00:19:45.800 killing process with pid 2999726 00:19:45.800 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2999726 00:19:45.800 Received shutdown signal, test time was about 10.000000 seconds 00:19:45.800 00:19:45.800 Latency(us) 00:19:45.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.800 =================================================================================================================== 00:19:45.800 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:45.800 
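For reference, the host side of the FIPS/TLS run above reduces to three commands that appear verbatim in the trace: start bdevperf in wait-for-RPC mode (-z) on its own Unix socket, attach a controller to the TLS listener using the PSK file, then trigger the verify workload through bdevperf.py. A condensed sketch, with the long workspace paths shortened:

# Condensed from the commands logged above (paths shortened).
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk test/nvmf/fips/key.txt        # deprecated PSK-path form, matching the warnings in this log
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests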
[2024-07-26 14:01:13.007451] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:45.800 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2999726 00:19:45.800 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:45.800 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:45.800 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:45.800 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:45.800 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:45.800 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:45.800 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:45.800 rmmod nvme_tcp 00:19:45.800 rmmod nvme_fabrics 00:19:46.060 rmmod nvme_keyring 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2999587 ']' 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2999587 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2999587 ']' 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2999587 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2999587 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2999587' 00:19:46.060 killing process with pid 2999587 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2999587 00:19:46.060 [2024-07-26 14:01:13.297679] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2999587 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:46.060 14:01:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:46.060 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.603 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:48.603 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:48.603 00:19:48.603 real 0m20.400s 00:19:48.603 user 0m23.442s 00:19:48.603 sys 0m7.825s 00:19:48.603 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:48.603 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:48.603 ************************************ 00:19:48.603 END TEST nvmf_fips 00:19:48.603 ************************************ 00:19:48.603 14:01:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:19:48.603 14:01:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:19:48.603 14:01:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:19:48.603 14:01:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:19:48.603 14:01:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:19:48.603 14:01:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:53.887 
14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:53.887 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:53.888 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:53.888 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:53.888 Found net devices under 0000:86:00.0: cvl_0_0 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:53.888 Found net devices under 0000:86:00.1: cvl_0_1 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:53.888 ************************************ 00:19:53.888 START TEST nvmf_perf_adq 00:19:53.888 ************************************ 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:53.888 * Looking for test storage... 
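The gather_supported_nvmf_pci_devs trace above (and its repetitions later in this log) filters PCI functions by vendor/device ID (Intel E810 0x1592/0x159b, X722 0x37d2, plus a list of Mellanox IDs) and then resolves each match to its kernel netdev through sysfs, which is how 0000:86:00.0 and 0000:86:00.1 end up as cvl_0_0 and cvl_0_1. The resolution step amounts to:

# Sketch of the sysfs lookup the trace performs for each matching PCI address.
for pci in 0000:86:00.0 0000:86:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do   # one entry per netdev bound to that PCI function
        echo "Found net devices under $pci: ${dev##*/}"
    done
done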
00:19:53.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.888 14:01:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:53.888 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:59.169 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:59.170 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:59.170 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:59.170 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:59.170 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:59.170 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:59.170 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:59.170 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:59.170 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:59.170 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:59.170 14:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:59.170 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.170 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:59.170 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:59.170 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:59.170 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:59.170 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.170 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.170 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:59.170 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.170 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:59.170 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:59.170 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:59.170 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:59.170 Found net devices under 0000:86:00.0: cvl_0_0 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:59.170 Found net devices under 0000:86:00.1: cvl_0_1 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:59.170 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:59.740 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:01.654 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:07.005 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:07.005 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:07.005 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.005 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:07.005 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:07.005 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:07.005 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.005 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:07.005 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.005 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:07.005 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:07.006 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:07.006 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:07.006 Found net devices under 0000:86:00.0: cvl_0_0 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:07.006 14:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:07.006 Found net devices under 0000:86:00.1: cvl_0_1 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:07.006 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
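The nvmf_tcp_init sequence above builds the back-to-back topology used for the rest of the run: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target (10.0.0.2), while the second port (cvl_0_1) stays in the default namespace as the initiator (10.0.0.1). Stripped of the xtrace noise, the setup is:

# Target port isolated in a network namespace; initiator port left in the default namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

The iptables rule that follows opens TCP port 4420 on the initiator-side interface, and the two pings confirm connectivity in both directions before nvmf_tgt is started inside the namespace.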
00:20:07.006 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:07.006 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:07.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:07.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:20:07.006 00:20:07.006 --- 10.0.0.2 ping statistics --- 00:20:07.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.006 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:20:07.006 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:07.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:07.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:20:07.006 00:20:07.006 --- 10.0.0.1 ping statistics --- 00:20:07.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.006 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:20:07.006 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.006 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:07.006 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:07.006 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.006 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:07.007 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:07.007 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.007 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:07.007 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:07.007 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:07.007 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:07.007 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:07.007 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:07.007 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:07.007 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3009582 00:20:07.007 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3009582 00:20:07.007 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3009582 ']' 00:20:07.007 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.007 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:07.007 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:20:07.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.007 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:07.007 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:07.007 [2024-07-26 14:01:34.125411] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:20:07.007 [2024-07-26 14:01:34.125454] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.007 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.007 [2024-07-26 14:01:34.179913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:07.007 [2024-07-26 14:01:34.260821] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.007 [2024-07-26 14:01:34.260858] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.007 [2024-07-26 14:01:34.260866] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.007 [2024-07-26 14:01:34.260872] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.007 [2024-07-26 14:01:34.260877] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.007 [2024-07-26 14:01:34.260920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.007 [2024-07-26 14:01:34.260948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.007 [2024-07-26 14:01:34.261033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:07.007 [2024-07-26 14:01:34.261034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.577 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:07.577 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:20:07.577 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:07.577 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:07.577 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:07.577 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.577 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:07.577 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:07.577 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:07.577 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.577 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:07.577 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
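adq_configure_nvmf_target begins by asking the target which socket implementation is the default and pulling the name out of the JSON reply with jq; the options applied in the following trace (placement ID, zero-copy send, the transport's --sock-priority) are then set against that implementation. Outside the test harness's rpc_cmd wrapper, the equivalent query would look roughly like this (a sketch; the wrapper also points rpc.py at the target running inside the namespace):

# Query the default sock implementation ("posix" in this run) and tune it, as the trace below does.
impl=$(./scripts/rpc.py sock_get_default_impl | jq -r .impl_name)
./scripts/rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i "$impl"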
00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:07.837 [2024-07-26 14:01:35.122327] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:07.837 Malloc1 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:07.837 [2024-07-26 14:01:35.173995] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3009682 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:07.837 14:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:07.837 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.376 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:10.376 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.376 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:10.376 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.376 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:10.376 "tick_rate": 2300000000, 00:20:10.376 "poll_groups": [ 00:20:10.376 { 00:20:10.376 "name": "nvmf_tgt_poll_group_000", 00:20:10.376 "admin_qpairs": 1, 00:20:10.376 "io_qpairs": 1, 00:20:10.376 "current_admin_qpairs": 1, 00:20:10.376 "current_io_qpairs": 1, 00:20:10.376 "pending_bdev_io": 0, 00:20:10.376 "completed_nvme_io": 18629, 00:20:10.376 "transports": [ 00:20:10.376 { 00:20:10.376 "trtype": "TCP" 00:20:10.376 } 00:20:10.376 ] 00:20:10.376 }, 00:20:10.376 { 00:20:10.376 "name": "nvmf_tgt_poll_group_001", 00:20:10.376 "admin_qpairs": 0, 00:20:10.376 "io_qpairs": 1, 00:20:10.376 "current_admin_qpairs": 0, 00:20:10.376 "current_io_qpairs": 1, 00:20:10.376 "pending_bdev_io": 0, 00:20:10.376 "completed_nvme_io": 18722, 00:20:10.376 "transports": [ 00:20:10.376 { 00:20:10.376 "trtype": "TCP" 00:20:10.376 } 00:20:10.376 ] 00:20:10.376 }, 00:20:10.376 { 00:20:10.376 "name": "nvmf_tgt_poll_group_002", 00:20:10.376 "admin_qpairs": 0, 00:20:10.376 "io_qpairs": 1, 00:20:10.376 "current_admin_qpairs": 0, 00:20:10.376 "current_io_qpairs": 1, 00:20:10.376 "pending_bdev_io": 0, 00:20:10.376 "completed_nvme_io": 18522, 00:20:10.376 "transports": [ 00:20:10.376 { 00:20:10.376 "trtype": "TCP" 00:20:10.376 } 00:20:10.376 ] 00:20:10.376 }, 00:20:10.376 { 00:20:10.376 "name": "nvmf_tgt_poll_group_003", 00:20:10.376 "admin_qpairs": 0, 00:20:10.376 "io_qpairs": 1, 00:20:10.376 "current_admin_qpairs": 0, 00:20:10.376 "current_io_qpairs": 1, 00:20:10.376 "pending_bdev_io": 0, 00:20:10.376 "completed_nvme_io": 18219, 00:20:10.376 "transports": [ 00:20:10.376 { 00:20:10.376 "trtype": "TCP" 00:20:10.376 } 00:20:10.376 ] 00:20:10.376 } 00:20:10.376 ] 00:20:10.376 }' 00:20:10.376 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:10.376 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:10.376 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:10.376 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:10.376 14:01:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 3009682 00:20:18.507 Initializing NVMe Controllers 00:20:18.507 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:18.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:18.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:18.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:18.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:18.507 Initialization complete. Launching workers. 00:20:18.507 ======================================================== 00:20:18.507 Latency(us) 00:20:18.507 Device Information : IOPS MiB/s Average min max 00:20:18.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10240.93 40.00 6262.88 1635.56 46030.52 00:20:18.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10791.92 42.16 5931.45 1481.74 19171.18 00:20:18.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10400.43 40.63 6153.34 1570.21 12249.93 00:20:18.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10772.32 42.08 5941.39 1596.26 12396.33 00:20:18.507 ======================================================== 00:20:18.507 Total : 42205.60 164.87 6069.09 1481.74 46030.52 00:20:18.507 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:18.507 rmmod nvme_tcp 00:20:18.507 rmmod nvme_fabrics 00:20:18.507 rmmod nvme_keyring 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3009582 ']' 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3009582 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3009582 ']' 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3009582 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3009582 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:18.507 14:01:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3009582' 00:20:18.507 killing process with pid 3009582 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3009582 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3009582 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:18.507 14:01:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.049 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:21.049 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:21.049 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:21.620 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:23.531 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:28.815 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:28.816 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:28.816 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:28.816 Found net devices under 0000:86:00.0: cvl_0_0 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:28.816 Found net devices under 0000:86:00.1: cvl_0_1 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:28.816 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:28.816 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:28.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:20:28.816 00:20:28.816 --- 10.0.0.2 ping statistics --- 00:20:28.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.817 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:20:28.817 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:28.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:28.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.431 ms 00:20:28.817 00:20:28.817 --- 10.0.0.1 ping statistics --- 00:20:28.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.817 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:20:28.817 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.817 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:28.817 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:28.817 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.817 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:28.817 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:28.817 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.817 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:28.817 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:28.817 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:20:28.817 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:28.817 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:28.817 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:28.817 net.core.busy_poll = 1 00:20:28.817 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:28.817 net.core.busy_read = 1 00:20:28.817 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:28.817 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:28.817 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:28.817 
14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:28.817 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:29.077 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:29.077 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:29.077 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:29.077 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.077 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3013468 00:20:29.077 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3013468 00:20:29.077 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:29.077 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3013468 ']' 00:20:29.077 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.077 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:29.077 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.077 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:29.077 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.077 [2024-07-26 14:01:56.321887] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:20:29.077 [2024-07-26 14:01:56.321937] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.077 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.077 [2024-07-26 14:01:56.381806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:29.077 [2024-07-26 14:01:56.463481] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.077 [2024-07-26 14:01:56.463518] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.077 [2024-07-26 14:01:56.463525] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.077 [2024-07-26 14:01:56.463530] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.077 [2024-07-26 14:01:56.463535] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
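For orientation, the nvmf_tcp_init sequence traced above effectively treats the two E810 ports on this rig as a back-to-back link: one port (cvl_0_0) is moved into a private network namespace and carries the target address 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables ACCEPT rule is inserted for TCP port 4420, and a ping in each direction confirms the path before the target application is started. A minimal sketch of that plumbing, assuming the same interface names, namespace name, and addresses used in this run (they are specific to this rig):

  # move the target-side port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                 # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> root namespace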
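The adq_configure_driver step seen at perf_adq.sh@22-38 above is the host/NIC half of the ADQ setup: hardware TC offload is enabled on the E810 port, busy polling is switched on system-wide, the queues are split into two traffic classes with an mqprio qdisc in channel mode, a hardware-offloaded flower filter steers NVMe/TCP traffic bound for 10.0.0.2:4420 into the second class, and SPDK's set_xps_rxqs helper is then run against the interface. Condensed from the traced commands (interface, namespace, and address are specific to this run):

  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # two traffic classes, two queues each; ADQ needs "hw 1 mode channel"
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  # steer NVMe/TCP (dst 10.0.0.2:4420) into traffic class 1, offloaded in hardware (skip_sw)
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  ip netns exec cvl_0_0_ns_spdk ./scripts/perf/nvmf/set_xps_rxqs cvl_0_0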
00:20:29.077 [2024-07-26 14:01:56.463578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.077 [2024-07-26 14:01:56.463678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.077 [2024-07-26 14:01:56.463753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:29.077 [2024-07-26 14:01:56.463754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.015 [2024-07-26 14:01:57.318817] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.015 Malloc1 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.015 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.015 [2024-07-26 14:01:57.366259] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.016 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.016 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3013708 00:20:30.016 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:20:30.016 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:30.016 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.550 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:20:32.550 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.550 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:32.550 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.550 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:20:32.550 "tick_rate": 2300000000, 00:20:32.550 "poll_groups": [ 00:20:32.550 { 00:20:32.550 "name": "nvmf_tgt_poll_group_000", 00:20:32.550 "admin_qpairs": 1, 00:20:32.550 "io_qpairs": 2, 00:20:32.550 "current_admin_qpairs": 1, 00:20:32.550 
"current_io_qpairs": 2, 00:20:32.550 "pending_bdev_io": 0, 00:20:32.550 "completed_nvme_io": 25944, 00:20:32.550 "transports": [ 00:20:32.550 { 00:20:32.550 "trtype": "TCP" 00:20:32.550 } 00:20:32.550 ] 00:20:32.550 }, 00:20:32.550 { 00:20:32.550 "name": "nvmf_tgt_poll_group_001", 00:20:32.550 "admin_qpairs": 0, 00:20:32.550 "io_qpairs": 2, 00:20:32.550 "current_admin_qpairs": 0, 00:20:32.550 "current_io_qpairs": 2, 00:20:32.550 "pending_bdev_io": 0, 00:20:32.550 "completed_nvme_io": 27626, 00:20:32.550 "transports": [ 00:20:32.550 { 00:20:32.550 "trtype": "TCP" 00:20:32.550 } 00:20:32.550 ] 00:20:32.550 }, 00:20:32.550 { 00:20:32.550 "name": "nvmf_tgt_poll_group_002", 00:20:32.550 "admin_qpairs": 0, 00:20:32.550 "io_qpairs": 0, 00:20:32.550 "current_admin_qpairs": 0, 00:20:32.550 "current_io_qpairs": 0, 00:20:32.550 "pending_bdev_io": 0, 00:20:32.550 "completed_nvme_io": 0, 00:20:32.550 "transports": [ 00:20:32.550 { 00:20:32.550 "trtype": "TCP" 00:20:32.550 } 00:20:32.550 ] 00:20:32.550 }, 00:20:32.550 { 00:20:32.550 "name": "nvmf_tgt_poll_group_003", 00:20:32.550 "admin_qpairs": 0, 00:20:32.550 "io_qpairs": 0, 00:20:32.550 "current_admin_qpairs": 0, 00:20:32.550 "current_io_qpairs": 0, 00:20:32.550 "pending_bdev_io": 0, 00:20:32.550 "completed_nvme_io": 0, 00:20:32.550 "transports": [ 00:20:32.550 { 00:20:32.550 "trtype": "TCP" 00:20:32.550 } 00:20:32.550 ] 00:20:32.550 } 00:20:32.550 ] 00:20:32.550 }' 00:20:32.550 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:32.550 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:20:32.550 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:20:32.550 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:20:32.550 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3013708 00:20:40.751 Initializing NVMe Controllers 00:20:40.751 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:40.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:40.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:40.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:40.751 Initialization complete. Launching workers. 
00:20:40.751 ======================================================== 00:20:40.751 Latency(us) 00:20:40.751 Device Information : IOPS MiB/s Average min max 00:20:40.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6935.50 27.09 9259.28 1891.89 55028.18 00:20:40.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6412.60 25.05 9981.91 1878.41 54464.52 00:20:40.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7618.70 29.76 8416.17 1688.48 55322.27 00:20:40.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7622.90 29.78 8412.68 1687.55 55810.82 00:20:40.751 ======================================================== 00:20:40.751 Total : 28589.69 111.68 8970.96 1687.55 55810.82 00:20:40.751 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:40.751 rmmod nvme_tcp 00:20:40.751 rmmod nvme_fabrics 00:20:40.751 rmmod nvme_keyring 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3013468 ']' 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3013468 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3013468 ']' 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3013468 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3013468 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3013468' 00:20:40.751 killing process with pid 3013468 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3013468 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3013468 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:40.751 
14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.751 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.050 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:44.050 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:44.050 00:20:44.050 real 0m50.155s 00:20:44.050 user 2m49.884s 00:20:44.050 sys 0m9.648s 00:20:44.050 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:44.050 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.050 ************************************ 00:20:44.050 END TEST nvmf_perf_adq 00:20:44.050 ************************************ 00:20:44.050 14:02:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:44.050 14:02:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:44.050 14:02:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:44.050 14:02:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:44.050 ************************************ 00:20:44.050 START TEST nvmf_shutdown 00:20:44.050 ************************************ 00:20:44.050 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:44.050 * Looking for test storage... 
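On the target side of the perf_adq test that just finished, adq_configure_nvmf_target (perf_adq.sh@42-49 in the trace above) does the matching SPDK configuration: the target is launched with --wait-for-rpc so the posix sock implementation can be given a placement ID before the framework initializes, the TCP transport is created with a raised socket priority, and a malloc bdev is exported as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. A sketch of that flow, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper, paths relative to the SPDK repository, and values copied from this run:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  # ...wait for /var/tmp/spdk.sock to come up, then:
  ./scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420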
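The two nvmf_get_stats checks in that test assert different placements: the first (perf_adq.sh@78-79) requires every one of the four poll groups to own exactly one I/O qpair, while the second (perf_adq.sh@100-101) requires at least two poll groups to be left idle, meaning the four perf connections must be packed onto at most two groups, which the stats above bear out (io_qpairs 2, 2, 0, 0). The assertion itself is a short jq pipeline over the stats JSON; a sketch of the second form, again with scripts/rpc.py in place of rpc_cmd and simplified error handling:

  # count poll groups with no active I/O qpairs; the steering should leave at least 2 idle
  idle=$(./scripts/rpc.py nvmf_get_stats \
          | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
          | wc -l)
  if [[ $idle -lt 2 ]]; then
          echo "ADQ: I/O qpairs were not confined to the expected poll groups" >&2
          exit 1
  fi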
00:20:44.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:44.050 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:44.050 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:44.050 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:44.050 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.050 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.050 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.050 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:44.050 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:44.050 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.050 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.051 14:02:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:44.051 14:02:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:44.051 ************************************ 00:20:44.051 START TEST nvmf_shutdown_tc1 00:20:44.051 ************************************ 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:44.051 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:49.338 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:49.338 14:02:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:49.338 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:49.338 Found net devices under 0000:86:00.0: cvl_0_0 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:49.338 Found net devices under 0000:86:00.1: cvl_0_1 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.338 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:49.339 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:49.339 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:49.339 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:49.339 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:49.339 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:49.339 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.339 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:49.339 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:49.339 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:49.339 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:49.339 14:02:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:49.339 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:49.339 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:49.339 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:49.339 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:49.339 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:49.600 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:49.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:49.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:20:49.600 00:20:49.600 --- 10.0.0.2 ping statistics --- 00:20:49.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.600 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:20:49.600 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:49.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:49.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:20:49.600 00:20:49.600 --- 10.0.0.1 ping statistics --- 00:20:49.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.600 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:20:49.600 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:49.600 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:20:49.600 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:49.600 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:49.600 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:49.600 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:49.600 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:49.600 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:49.600 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:49.600 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:49.600 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:49.600 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:49.600 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:20:49.601 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:49.601 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3019148 00:20:49.601 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3019148 00:20:49.601 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3019148 ']' 00:20:49.601 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.601 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:49.601 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.601 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:49.601 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:49.601 [2024-07-26 14:02:16.871968] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:20:49.601 [2024-07-26 14:02:16.872010] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.601 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.601 [2024-07-26 14:02:16.928290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:49.601 [2024-07-26 14:02:17.006801] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.601 [2024-07-26 14:02:17.006841] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.601 [2024-07-26 14:02:17.006848] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.601 [2024-07-26 14:02:17.006854] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.601 [2024-07-26 14:02:17.006858] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
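For anyone reconstructing the topology from the trace above: the nvmf/common.sh steps between @244 and @268 move one of the two cvl ports into a private network namespace (the target side, 10.0.0.2) while the other stays in the default namespace (the initiator side, 10.0.0.1), so a single host can exercise NVMe/TCP over real NICs. A minimal sketch of that plumbing, using the interface names and addresses reported in the log above (an illustration of the sequence, not the helper itself):

#!/usr/bin/env bash
# Sketch of the namespace split performed during nvmf_tcp_init in the trace above.
# Assumes the two ice ports are already named cvl_0_0 / cvl_0_1, as the log reports.
set -e

TARGET_IF=cvl_0_0        # moved into its own namespace, will serve 10.0.0.2:4420
INITIATOR_IF=cvl_0_1     # stays in the default namespace as 10.0.0.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic to port 4420 through on the initiator-side interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Reachability check in both directions, matching the pings in the log.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

The target application itself is then launched inside that namespace, as the @480 line above shows: ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1E.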
00:20:49.601 [2024-07-26 14:02:17.006973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.601 [2024-07-26 14:02:17.007072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:49.601 [2024-07-26 14:02:17.007201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.601 [2024-07-26 14:02:17.007202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:50.542 [2024-07-26 14:02:17.743366] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:50.542 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:50.543 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:50.543 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:50.543 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:50.543 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:50.543 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:50.543 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:50.543 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:50.543 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.543 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:50.543 Malloc1 00:20:50.543 [2024-07-26 14:02:17.839405] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.543 Malloc2 00:20:50.543 Malloc3 00:20:50.543 Malloc4 00:20:50.803 Malloc5 00:20:50.803 Malloc6 00:20:50.803 Malloc7 00:20:50.803 Malloc8 00:20:50.803 Malloc9 00:20:50.803 Malloc10 00:20:50.803 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.803 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:50.803 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:50.803 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:51.064 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3019432 00:20:51.064 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3019432 /var/tmp/bdevperf.sock 00:20:51.064 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3019432 ']' 00:20:51.064 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.064 14:02:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:51.064 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:51.064 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:51.064 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:51.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:51.064 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:51.064 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:51.064 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:51.064 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:51.064 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:51.064 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:51.064 { 00:20:51.064 "params": { 00:20:51.064 "name": "Nvme$subsystem", 00:20:51.064 "trtype": "$TEST_TRANSPORT", 00:20:51.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.064 "adrfam": "ipv4", 00:20:51.064 "trsvcid": "$NVMF_PORT", 00:20:51.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.064 "hdgst": ${hdgst:-false}, 00:20:51.064 "ddgst": ${ddgst:-false} 00:20:51.064 }, 00:20:51.064 "method": "bdev_nvme_attach_controller" 00:20:51.064 } 00:20:51.064 EOF 00:20:51.064 )") 00:20:51.064 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:51.064 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:51.064 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:51.064 { 00:20:51.064 "params": { 00:20:51.064 "name": "Nvme$subsystem", 00:20:51.064 "trtype": "$TEST_TRANSPORT", 00:20:51.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.064 "adrfam": "ipv4", 00:20:51.064 "trsvcid": "$NVMF_PORT", 00:20:51.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.064 "hdgst": ${hdgst:-false}, 00:20:51.064 "ddgst": ${ddgst:-false} 00:20:51.064 }, 00:20:51.064 "method": "bdev_nvme_attach_controller" 00:20:51.064 } 00:20:51.064 EOF 00:20:51.064 )") 00:20:51.064 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:51.064 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:51.064 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:51.064 { 00:20:51.064 "params": { 00:20:51.064 "name": 
"Nvme$subsystem", 00:20:51.064 "trtype": "$TEST_TRANSPORT", 00:20:51.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.064 "adrfam": "ipv4", 00:20:51.064 "trsvcid": "$NVMF_PORT", 00:20:51.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.064 "hdgst": ${hdgst:-false}, 00:20:51.064 "ddgst": ${ddgst:-false} 00:20:51.064 }, 00:20:51.064 "method": "bdev_nvme_attach_controller" 00:20:51.064 } 00:20:51.064 EOF 00:20:51.064 )") 00:20:51.064 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:51.064 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:51.065 { 00:20:51.065 "params": { 00:20:51.065 "name": "Nvme$subsystem", 00:20:51.065 "trtype": "$TEST_TRANSPORT", 00:20:51.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.065 "adrfam": "ipv4", 00:20:51.065 "trsvcid": "$NVMF_PORT", 00:20:51.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.065 "hdgst": ${hdgst:-false}, 00:20:51.065 "ddgst": ${ddgst:-false} 00:20:51.065 }, 00:20:51.065 "method": "bdev_nvme_attach_controller" 00:20:51.065 } 00:20:51.065 EOF 00:20:51.065 )") 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:51.065 { 00:20:51.065 "params": { 00:20:51.065 "name": "Nvme$subsystem", 00:20:51.065 "trtype": "$TEST_TRANSPORT", 00:20:51.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.065 "adrfam": "ipv4", 00:20:51.065 "trsvcid": "$NVMF_PORT", 00:20:51.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.065 "hdgst": ${hdgst:-false}, 00:20:51.065 "ddgst": ${ddgst:-false} 00:20:51.065 }, 00:20:51.065 "method": "bdev_nvme_attach_controller" 00:20:51.065 } 00:20:51.065 EOF 00:20:51.065 )") 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:51.065 { 00:20:51.065 "params": { 00:20:51.065 "name": "Nvme$subsystem", 00:20:51.065 "trtype": "$TEST_TRANSPORT", 00:20:51.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.065 "adrfam": "ipv4", 00:20:51.065 "trsvcid": "$NVMF_PORT", 00:20:51.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.065 "hdgst": ${hdgst:-false}, 00:20:51.065 "ddgst": ${ddgst:-false} 00:20:51.065 }, 00:20:51.065 "method": "bdev_nvme_attach_controller" 00:20:51.065 } 00:20:51.065 EOF 00:20:51.065 )") 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:51.065 { 00:20:51.065 "params": { 00:20:51.065 "name": "Nvme$subsystem", 00:20:51.065 "trtype": "$TEST_TRANSPORT", 00:20:51.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.065 "adrfam": "ipv4", 00:20:51.065 "trsvcid": "$NVMF_PORT", 00:20:51.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.065 "hdgst": ${hdgst:-false}, 00:20:51.065 "ddgst": ${ddgst:-false} 00:20:51.065 }, 00:20:51.065 "method": "bdev_nvme_attach_controller" 00:20:51.065 } 00:20:51.065 EOF 00:20:51.065 )") 00:20:51.065 [2024-07-26 14:02:18.307293] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:20:51.065 [2024-07-26 14:02:18.307350] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:51.065 { 00:20:51.065 "params": { 00:20:51.065 "name": "Nvme$subsystem", 00:20:51.065 "trtype": "$TEST_TRANSPORT", 00:20:51.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.065 "adrfam": "ipv4", 00:20:51.065 "trsvcid": "$NVMF_PORT", 00:20:51.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.065 "hdgst": ${hdgst:-false}, 00:20:51.065 "ddgst": ${ddgst:-false} 00:20:51.065 }, 00:20:51.065 "method": "bdev_nvme_attach_controller" 00:20:51.065 } 00:20:51.065 EOF 00:20:51.065 )") 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:51.065 { 00:20:51.065 "params": { 00:20:51.065 "name": "Nvme$subsystem", 00:20:51.065 "trtype": "$TEST_TRANSPORT", 00:20:51.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.065 "adrfam": "ipv4", 00:20:51.065 "trsvcid": "$NVMF_PORT", 00:20:51.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.065 "hdgst": ${hdgst:-false}, 00:20:51.065 "ddgst": ${ddgst:-false} 00:20:51.065 }, 00:20:51.065 "method": "bdev_nvme_attach_controller" 00:20:51.065 } 00:20:51.065 EOF 00:20:51.065 )") 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:51.065 { 00:20:51.065 "params": { 00:20:51.065 "name": "Nvme$subsystem", 00:20:51.065 "trtype": "$TEST_TRANSPORT", 00:20:51.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.065 "adrfam": "ipv4", 00:20:51.065 
"trsvcid": "$NVMF_PORT", 00:20:51.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.065 "hdgst": ${hdgst:-false}, 00:20:51.065 "ddgst": ${ddgst:-false} 00:20:51.065 }, 00:20:51.065 "method": "bdev_nvme_attach_controller" 00:20:51.065 } 00:20:51.065 EOF 00:20:51.065 )") 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:51.065 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:51.065 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:51.065 "params": { 00:20:51.065 "name": "Nvme1", 00:20:51.065 "trtype": "tcp", 00:20:51.065 "traddr": "10.0.0.2", 00:20:51.065 "adrfam": "ipv4", 00:20:51.065 "trsvcid": "4420", 00:20:51.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.065 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:51.065 "hdgst": false, 00:20:51.065 "ddgst": false 00:20:51.065 }, 00:20:51.065 "method": "bdev_nvme_attach_controller" 00:20:51.065 },{ 00:20:51.065 "params": { 00:20:51.065 "name": "Nvme2", 00:20:51.065 "trtype": "tcp", 00:20:51.065 "traddr": "10.0.0.2", 00:20:51.065 "adrfam": "ipv4", 00:20:51.065 "trsvcid": "4420", 00:20:51.065 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:51.065 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:51.065 "hdgst": false, 00:20:51.065 "ddgst": false 00:20:51.065 }, 00:20:51.065 "method": "bdev_nvme_attach_controller" 00:20:51.065 },{ 00:20:51.065 "params": { 00:20:51.065 "name": "Nvme3", 00:20:51.065 "trtype": "tcp", 00:20:51.065 "traddr": "10.0.0.2", 00:20:51.065 "adrfam": "ipv4", 00:20:51.065 "trsvcid": "4420", 00:20:51.065 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:51.065 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:51.065 "hdgst": false, 00:20:51.065 "ddgst": false 00:20:51.065 }, 00:20:51.065 "method": "bdev_nvme_attach_controller" 00:20:51.065 },{ 00:20:51.065 "params": { 00:20:51.065 "name": "Nvme4", 00:20:51.065 "trtype": "tcp", 00:20:51.065 "traddr": "10.0.0.2", 00:20:51.065 "adrfam": "ipv4", 00:20:51.065 "trsvcid": "4420", 00:20:51.065 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:51.065 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:51.065 "hdgst": false, 00:20:51.065 "ddgst": false 00:20:51.065 }, 00:20:51.065 "method": "bdev_nvme_attach_controller" 00:20:51.065 },{ 00:20:51.065 "params": { 00:20:51.065 "name": "Nvme5", 00:20:51.065 "trtype": "tcp", 00:20:51.065 "traddr": "10.0.0.2", 00:20:51.065 "adrfam": "ipv4", 00:20:51.065 "trsvcid": "4420", 00:20:51.065 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:51.065 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:51.065 "hdgst": false, 00:20:51.065 "ddgst": false 00:20:51.065 }, 00:20:51.065 "method": "bdev_nvme_attach_controller" 00:20:51.065 },{ 00:20:51.065 "params": { 00:20:51.065 "name": "Nvme6", 00:20:51.065 "trtype": "tcp", 00:20:51.065 "traddr": "10.0.0.2", 00:20:51.066 "adrfam": "ipv4", 00:20:51.066 "trsvcid": "4420", 00:20:51.066 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:51.066 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:51.066 "hdgst": false, 00:20:51.066 "ddgst": false 00:20:51.066 }, 00:20:51.066 "method": "bdev_nvme_attach_controller" 00:20:51.066 },{ 00:20:51.066 "params": { 00:20:51.066 "name": "Nvme7", 00:20:51.066 "trtype": "tcp", 
00:20:51.066 "traddr": "10.0.0.2", 00:20:51.066 "adrfam": "ipv4", 00:20:51.066 "trsvcid": "4420", 00:20:51.066 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:51.066 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:51.066 "hdgst": false, 00:20:51.066 "ddgst": false 00:20:51.066 }, 00:20:51.066 "method": "bdev_nvme_attach_controller" 00:20:51.066 },{ 00:20:51.066 "params": { 00:20:51.066 "name": "Nvme8", 00:20:51.066 "trtype": "tcp", 00:20:51.066 "traddr": "10.0.0.2", 00:20:51.066 "adrfam": "ipv4", 00:20:51.066 "trsvcid": "4420", 00:20:51.066 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:51.066 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:51.066 "hdgst": false, 00:20:51.066 "ddgst": false 00:20:51.066 }, 00:20:51.066 "method": "bdev_nvme_attach_controller" 00:20:51.066 },{ 00:20:51.066 "params": { 00:20:51.066 "name": "Nvme9", 00:20:51.066 "trtype": "tcp", 00:20:51.066 "traddr": "10.0.0.2", 00:20:51.066 "adrfam": "ipv4", 00:20:51.066 "trsvcid": "4420", 00:20:51.066 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:51.066 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:51.066 "hdgst": false, 00:20:51.066 "ddgst": false 00:20:51.066 }, 00:20:51.066 "method": "bdev_nvme_attach_controller" 00:20:51.066 },{ 00:20:51.066 "params": { 00:20:51.066 "name": "Nvme10", 00:20:51.066 "trtype": "tcp", 00:20:51.066 "traddr": "10.0.0.2", 00:20:51.066 "adrfam": "ipv4", 00:20:51.066 "trsvcid": "4420", 00:20:51.066 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:51.066 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:51.066 "hdgst": false, 00:20:51.066 "ddgst": false 00:20:51.066 }, 00:20:51.066 "method": "bdev_nvme_attach_controller" 00:20:51.066 }' 00:20:51.066 [2024-07-26 14:02:18.363119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.066 [2024-07-26 14:02:18.436678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.447 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:52.448 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:20:52.448 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:52.448 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.448 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:52.448 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.448 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3019432 00:20:52.448 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:52.448 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:20:53.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3019432 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3019148 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.832 { 00:20:53.832 "params": { 00:20:53.832 "name": "Nvme$subsystem", 00:20:53.832 "trtype": "$TEST_TRANSPORT", 00:20:53.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.832 "adrfam": "ipv4", 00:20:53.832 "trsvcid": "$NVMF_PORT", 00:20:53.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.832 "hdgst": ${hdgst:-false}, 00:20:53.832 "ddgst": ${ddgst:-false} 00:20:53.832 }, 00:20:53.832 "method": "bdev_nvme_attach_controller" 00:20:53.832 } 00:20:53.832 EOF 00:20:53.832 )") 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.832 { 00:20:53.832 "params": { 00:20:53.832 "name": "Nvme$subsystem", 00:20:53.832 "trtype": "$TEST_TRANSPORT", 00:20:53.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.832 "adrfam": "ipv4", 00:20:53.832 "trsvcid": "$NVMF_PORT", 00:20:53.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.832 "hdgst": ${hdgst:-false}, 00:20:53.832 "ddgst": ${ddgst:-false} 00:20:53.832 }, 00:20:53.832 "method": "bdev_nvme_attach_controller" 00:20:53.832 } 00:20:53.832 EOF 00:20:53.832 )") 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.832 { 00:20:53.832 "params": { 00:20:53.832 "name": "Nvme$subsystem", 00:20:53.832 "trtype": "$TEST_TRANSPORT", 00:20:53.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.832 "adrfam": "ipv4", 00:20:53.832 "trsvcid": "$NVMF_PORT", 00:20:53.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.832 "hdgst": ${hdgst:-false}, 00:20:53.832 "ddgst": ${ddgst:-false} 00:20:53.832 }, 00:20:53.832 "method": "bdev_nvme_attach_controller" 00:20:53.832 } 00:20:53.832 EOF 00:20:53.832 )") 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.832 { 00:20:53.832 "params": { 00:20:53.832 "name": "Nvme$subsystem", 00:20:53.832 "trtype": "$TEST_TRANSPORT", 00:20:53.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.832 "adrfam": "ipv4", 00:20:53.832 "trsvcid": "$NVMF_PORT", 00:20:53.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.832 "hdgst": ${hdgst:-false}, 00:20:53.832 "ddgst": ${ddgst:-false} 00:20:53.832 }, 00:20:53.832 "method": "bdev_nvme_attach_controller" 00:20:53.832 } 00:20:53.832 EOF 00:20:53.832 )") 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.832 { 00:20:53.832 "params": { 00:20:53.832 "name": "Nvme$subsystem", 00:20:53.832 "trtype": "$TEST_TRANSPORT", 00:20:53.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.832 "adrfam": "ipv4", 00:20:53.832 "trsvcid": "$NVMF_PORT", 00:20:53.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.832 "hdgst": ${hdgst:-false}, 00:20:53.832 "ddgst": ${ddgst:-false} 00:20:53.832 }, 00:20:53.832 "method": "bdev_nvme_attach_controller" 00:20:53.832 } 00:20:53.832 EOF 00:20:53.832 )") 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.832 { 00:20:53.832 "params": { 00:20:53.832 "name": "Nvme$subsystem", 00:20:53.832 "trtype": "$TEST_TRANSPORT", 00:20:53.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.832 "adrfam": "ipv4", 00:20:53.832 "trsvcid": "$NVMF_PORT", 00:20:53.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.832 "hdgst": ${hdgst:-false}, 00:20:53.832 "ddgst": ${ddgst:-false} 00:20:53.832 }, 00:20:53.832 "method": "bdev_nvme_attach_controller" 00:20:53.832 } 00:20:53.832 EOF 00:20:53.832 )") 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.832 { 00:20:53.832 "params": { 00:20:53.832 "name": "Nvme$subsystem", 00:20:53.832 "trtype": "$TEST_TRANSPORT", 00:20:53.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.832 "adrfam": "ipv4", 00:20:53.832 "trsvcid": "$NVMF_PORT", 00:20:53.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.832 "hdgst": ${hdgst:-false}, 00:20:53.832 "ddgst": ${ddgst:-false} 00:20:53.832 }, 00:20:53.832 "method": "bdev_nvme_attach_controller" 00:20:53.832 } 00:20:53.832 EOF 00:20:53.832 )") 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:53.832 [2024-07-26 
14:02:20.870799] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:20:53.832 [2024-07-26 14:02:20.870850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3019910 ] 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.832 { 00:20:53.832 "params": { 00:20:53.832 "name": "Nvme$subsystem", 00:20:53.832 "trtype": "$TEST_TRANSPORT", 00:20:53.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.832 "adrfam": "ipv4", 00:20:53.832 "trsvcid": "$NVMF_PORT", 00:20:53.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.832 "hdgst": ${hdgst:-false}, 00:20:53.832 "ddgst": ${ddgst:-false} 00:20:53.832 }, 00:20:53.832 "method": "bdev_nvme_attach_controller" 00:20:53.832 } 00:20:53.832 EOF 00:20:53.832 )") 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.832 { 00:20:53.832 "params": { 00:20:53.832 "name": "Nvme$subsystem", 00:20:53.832 "trtype": "$TEST_TRANSPORT", 00:20:53.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.832 "adrfam": "ipv4", 00:20:53.832 "trsvcid": "$NVMF_PORT", 00:20:53.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.832 "hdgst": ${hdgst:-false}, 00:20:53.832 "ddgst": ${ddgst:-false} 00:20:53.832 }, 00:20:53.832 "method": "bdev_nvme_attach_controller" 00:20:53.832 } 00:20:53.832 EOF 00:20:53.832 )") 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.832 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.832 { 00:20:53.832 "params": { 00:20:53.832 "name": "Nvme$subsystem", 00:20:53.832 "trtype": "$TEST_TRANSPORT", 00:20:53.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.833 "adrfam": "ipv4", 00:20:53.833 "trsvcid": "$NVMF_PORT", 00:20:53.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.833 "hdgst": ${hdgst:-false}, 00:20:53.833 "ddgst": ${ddgst:-false} 00:20:53.833 }, 00:20:53.833 "method": "bdev_nvme_attach_controller" 00:20:53.833 } 00:20:53.833 EOF 00:20:53.833 )") 00:20:53.833 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:53.833 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
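The perf pass whose results appear below is driven the same way as the earlier bdev_svc attempt: the generated controller JSON is handed to bdevperf over a process substitution (the /dev/fd/62 seen in the command line) rather than a file on disk. A minimal reduction of that invocation, with the flag values copied from the trace and the path shortened:

# -q 64    : queue depth
# -o 65536 : 64 KiB I/O size
# -w verify: 'verify' read/write workload
# -t 1     : run for one second
./build/examples/bdevperf \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1

With ten subsystems attached, this exercises the ten Nvme*n1 bdevs reported in the "Running I/O for 1 seconds" summary that follows.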
00:20:53.833 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.833 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:53.833 14:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:53.833 "params": { 00:20:53.833 "name": "Nvme1", 00:20:53.833 "trtype": "tcp", 00:20:53.833 "traddr": "10.0.0.2", 00:20:53.833 "adrfam": "ipv4", 00:20:53.833 "trsvcid": "4420", 00:20:53.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:53.833 "hdgst": false, 00:20:53.833 "ddgst": false 00:20:53.833 }, 00:20:53.833 "method": "bdev_nvme_attach_controller" 00:20:53.833 },{ 00:20:53.833 "params": { 00:20:53.833 "name": "Nvme2", 00:20:53.833 "trtype": "tcp", 00:20:53.833 "traddr": "10.0.0.2", 00:20:53.833 "adrfam": "ipv4", 00:20:53.833 "trsvcid": "4420", 00:20:53.833 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:53.833 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:53.833 "hdgst": false, 00:20:53.833 "ddgst": false 00:20:53.833 }, 00:20:53.833 "method": "bdev_nvme_attach_controller" 00:20:53.833 },{ 00:20:53.833 "params": { 00:20:53.833 "name": "Nvme3", 00:20:53.833 "trtype": "tcp", 00:20:53.833 "traddr": "10.0.0.2", 00:20:53.833 "adrfam": "ipv4", 00:20:53.833 "trsvcid": "4420", 00:20:53.833 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:53.833 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:53.833 "hdgst": false, 00:20:53.833 "ddgst": false 00:20:53.833 }, 00:20:53.833 "method": "bdev_nvme_attach_controller" 00:20:53.833 },{ 00:20:53.833 "params": { 00:20:53.833 "name": "Nvme4", 00:20:53.833 "trtype": "tcp", 00:20:53.833 "traddr": "10.0.0.2", 00:20:53.833 "adrfam": "ipv4", 00:20:53.833 "trsvcid": "4420", 00:20:53.833 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:53.833 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:53.833 "hdgst": false, 00:20:53.833 "ddgst": false 00:20:53.833 }, 00:20:53.833 "method": "bdev_nvme_attach_controller" 00:20:53.833 },{ 00:20:53.833 "params": { 00:20:53.833 "name": "Nvme5", 00:20:53.833 "trtype": "tcp", 00:20:53.833 "traddr": "10.0.0.2", 00:20:53.833 "adrfam": "ipv4", 00:20:53.833 "trsvcid": "4420", 00:20:53.833 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:53.833 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:53.833 "hdgst": false, 00:20:53.833 "ddgst": false 00:20:53.833 }, 00:20:53.833 "method": "bdev_nvme_attach_controller" 00:20:53.833 },{ 00:20:53.833 "params": { 00:20:53.833 "name": "Nvme6", 00:20:53.833 "trtype": "tcp", 00:20:53.833 "traddr": "10.0.0.2", 00:20:53.833 "adrfam": "ipv4", 00:20:53.833 "trsvcid": "4420", 00:20:53.833 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:53.833 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:53.833 "hdgst": false, 00:20:53.833 "ddgst": false 00:20:53.833 }, 00:20:53.833 "method": "bdev_nvme_attach_controller" 00:20:53.833 },{ 00:20:53.833 "params": { 00:20:53.833 "name": "Nvme7", 00:20:53.833 "trtype": "tcp", 00:20:53.833 "traddr": "10.0.0.2", 00:20:53.833 "adrfam": "ipv4", 00:20:53.833 "trsvcid": "4420", 00:20:53.833 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:53.833 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:53.833 "hdgst": false, 00:20:53.833 "ddgst": false 00:20:53.833 }, 00:20:53.833 "method": "bdev_nvme_attach_controller" 00:20:53.833 },{ 00:20:53.833 "params": { 00:20:53.833 "name": "Nvme8", 00:20:53.833 "trtype": "tcp", 00:20:53.833 "traddr": "10.0.0.2", 00:20:53.833 "adrfam": "ipv4", 00:20:53.833 "trsvcid": "4420", 00:20:53.833 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:20:53.833 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:53.833 "hdgst": false, 00:20:53.833 "ddgst": false 00:20:53.833 }, 00:20:53.833 "method": "bdev_nvme_attach_controller" 00:20:53.833 },{ 00:20:53.833 "params": { 00:20:53.833 "name": "Nvme9", 00:20:53.833 "trtype": "tcp", 00:20:53.833 "traddr": "10.0.0.2", 00:20:53.833 "adrfam": "ipv4", 00:20:53.833 "trsvcid": "4420", 00:20:53.833 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:53.833 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:53.833 "hdgst": false, 00:20:53.833 "ddgst": false 00:20:53.833 }, 00:20:53.833 "method": "bdev_nvme_attach_controller" 00:20:53.833 },{ 00:20:53.833 "params": { 00:20:53.833 "name": "Nvme10", 00:20:53.833 "trtype": "tcp", 00:20:53.833 "traddr": "10.0.0.2", 00:20:53.833 "adrfam": "ipv4", 00:20:53.833 "trsvcid": "4420", 00:20:53.833 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:53.833 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:53.833 "hdgst": false, 00:20:53.833 "ddgst": false 00:20:53.833 }, 00:20:53.833 "method": "bdev_nvme_attach_controller" 00:20:53.833 }' 00:20:53.833 [2024-07-26 14:02:20.927920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.833 [2024-07-26 14:02:21.002035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.218 Running I/O for 1 seconds... 00:20:56.600 00:20:56.600 Latency(us) 00:20:56.600 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.600 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.600 Verification LBA range: start 0x0 length 0x400 00:20:56.600 Nvme1n1 : 1.17 273.81 17.11 0.00 0.00 230547.10 21655.37 221568.67 00:20:56.600 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.600 Verification LBA range: start 0x0 length 0x400 00:20:56.600 Nvme2n1 : 1.11 229.79 14.36 0.00 0.00 272111.75 20971.52 242540.19 00:20:56.600 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.600 Verification LBA range: start 0x0 length 0x400 00:20:56.600 Nvme3n1 : 1.16 275.12 17.19 0.00 0.00 224056.90 21427.42 224304.08 00:20:56.600 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.600 Verification LBA range: start 0x0 length 0x400 00:20:56.600 Nvme4n1 : 1.17 273.17 17.07 0.00 0.00 222659.09 21655.37 227039.50 00:20:56.600 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.600 Verification LBA range: start 0x0 length 0x400 00:20:56.600 Nvme5n1 : 1.19 269.01 16.81 0.00 0.00 223248.21 21883.33 216097.84 00:20:56.600 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.600 Verification LBA range: start 0x0 length 0x400 00:20:56.600 Nvme6n1 : 1.18 272.34 17.02 0.00 0.00 217131.45 29405.72 231598.53 00:20:56.600 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.600 Verification LBA range: start 0x0 length 0x400 00:20:56.600 Nvme7n1 : 1.18 271.71 16.98 0.00 0.00 214566.29 21541.40 224304.08 00:20:56.600 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.600 Verification LBA range: start 0x0 length 0x400 00:20:56.600 Nvme8n1 : 1.19 267.93 16.75 0.00 0.00 214426.05 10200.82 240716.58 00:20:56.600 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.600 Verification LBA range: start 0x0 length 0x400 00:20:56.600 Nvme9n1 : 1.15 225.45 14.09 0.00 0.00 249344.42 4644.51 240716.58 00:20:56.600 Job: Nvme10n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:20:56.600 Verification LBA range: start 0x0 length 0x400 00:20:56.600 Nvme10n1 : 1.25 204.86 12.80 0.00 0.00 264611.62 16412.49 319131.83 00:20:56.601 =================================================================================================================== 00:20:56.601 Total : 2563.19 160.20 0.00 0.00 231458.82 4644.51 319131.83 00:20:56.601 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:20:56.601 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:56.601 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:56.601 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:56.601 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:56.601 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:56.601 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:20:56.601 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:56.601 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:20:56.601 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:56.601 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:56.601 rmmod nvme_tcp 00:20:56.601 rmmod nvme_fabrics 00:20:56.601 rmmod nvme_keyring 00:20:56.601 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:56.601 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:20:56.601 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:20:56.601 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3019148 ']' 00:20:56.601 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3019148 00:20:56.601 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 3019148 ']' 00:20:56.601 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 3019148 00:20:56.601 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:20:56.601 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:56.601 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3019148 00:20:56.861 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:56.861 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:56.861 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3019148' 00:20:56.861 killing process with pid 3019148 00:20:56.861 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 3019148 00:20:56.861 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 3019148 00:20:57.120 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:57.120 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:57.120 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:57.120 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:57.120 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:57.120 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.120 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.121 14:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.034 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:59.034 00:20:59.034 real 0m15.284s 00:20:59.034 user 0m35.123s 00:20:59.034 sys 0m5.639s 00:20:59.034 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:59.034 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:59.034 ************************************ 00:20:59.034 END TEST nvmf_shutdown_tc1 00:20:59.034 ************************************ 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:59.296 ************************************ 00:20:59.296 START TEST nvmf_shutdown_tc2 00:20:59.296 ************************************ 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:59.296 14:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:59.296 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:59.296 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:59.296 14:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.296 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:59.297 Found net devices under 0000:86:00.0: cvl_0_0 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:59.297 Found net devices under 0000:86:00.1: cvl_0_1 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.297 14:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:59.297 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:59.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:59.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:20:59.558 00:20:59.558 --- 10.0.0.2 ping statistics --- 00:20:59.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.558 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:59.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:59.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.414 ms 00:20:59.558 00:20:59.558 --- 10.0.0.1 ping statistics --- 00:20:59.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.558 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3020936 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3020936 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3020936 ']' 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
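The namespace and address plumbing traced above (nvmf_tcp_init) can be reproduced outside the harness with plain iproute2/iptables. The sketch below mirrors only what the trace shows: the interface names cvl_0_0/cvl_0_1, the 10.0.0.1/10.0.0.2 addresses and TCP port 4420 are taken from this run, while the variable names and ordering details are illustrative; it assumes root privileges and that both E810 ports are otherwise unused.

#!/usr/bin/env bash
# Minimal sketch of the target-side namespace wiring traced above.
set -e

TARGET_IF=cvl_0_0          # port handed to the SPDK target
INITIATOR_IF=cvl_0_1       # port left in the root namespace for the initiator
TARGET_NS=cvl_0_0_ns_spdk  # namespace name used by this run
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1

# Start from clean addresses, then move the target port into its own namespace.
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$TARGET_NS"
ip link set "$TARGET_IF" netns "$TARGET_NS"

# Address both ends of the point-to-point link.
ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
ip netns exec "$TARGET_NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"

# Bring the links up, including loopback inside the namespace.
ip link set "$INITIATOR_IF" up
ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
ip netns exec "$TARGET_NS" ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface and verify reachability.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 "$TARGET_IP"
ip netns exec "$TARGET_NS" ping -c 1 "$INITIATOR_IP"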
00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:59.558 14:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:59.558 [2024-07-26 14:02:26.858361] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:20:59.558 [2024-07-26 14:02:26.858404] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.558 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.558 [2024-07-26 14:02:26.915631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:59.818 [2024-07-26 14:02:26.998613] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.818 [2024-07-26 14:02:26.998649] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.818 [2024-07-26 14:02:26.998660] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.818 [2024-07-26 14:02:26.998665] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.818 [2024-07-26 14:02:26.998670] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:59.818 [2024-07-26 14:02:26.998781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.818 [2024-07-26 14:02:26.998876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:59.818 [2024-07-26 14:02:26.998984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.818 [2024-07-26 14:02:26.998985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:00.389 [2024-07-26 14:02:27.705249] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
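With the namespace in place, the trace above starts nvmf_tgt inside it and then issues nvmf_create_transport over the /var/tmp/spdk.sock RPC socket. A hedged sketch of that sequence follows: it calls scripts/rpc.py directly (the harness's rpc_cmd is a thin wrapper around it), copies the binary path, the -i/-e/-m core mask and the transport options from the trace, and only approximates waitforlisten with a simple polling loop.

#!/usr/bin/env bash
# Sketch: start the NVMe-oF target in the namespace and create the TCP transport.
set -e

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # build tree used in this run
TARGET_NS=cvl_0_0_ns_spdk
RPC_SOCK=/var/tmp/spdk.sock

# Shared-memory id 0, all tracepoint groups (-e 0xFFFF), reactors on cores 1-4 (-m 0x1E).
ip netns exec "$TARGET_NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Rough waitforlisten equivalent: poll the RPC socket until the app answers (or give up).
for _ in $(seq 1 100); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" -t 1 rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done

# TCP transport with the options used by this run (-o, io-unit-size 8192).
"$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" nvmf_create_transport -t tcp -o -u 8192
echo "nvmf_tgt running as pid $nvmfpid"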
00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # 
rpc_cmd 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.389 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:00.389 Malloc1 00:21:00.389 [2024-07-26 14:02:27.800809] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.389 Malloc2 00:21:00.649 Malloc3 00:21:00.649 Malloc4 00:21:00.649 Malloc5 00:21:00.649 Malloc6 00:21:00.649 Malloc7 00:21:00.649 Malloc8 00:21:00.913 Malloc9 00:21:00.913 Malloc10 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3021221 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3021221 /var/tmp/bdevperf.sock 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3021221 ']' 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
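The rpcs.txt batch assembled by the loop above is not printed in the log, but its outcome is visible: Malloc1 through Malloc10 each back one subsystem, nqn.2016-06.io.spdk:cnode1..10, all listening on 10.0.0.2:4420. The sketch below reproduces that outcome with individual RPC calls; the malloc geometry (128 MiB x 512 B blocks) and the serial numbers are placeholders, not values taken from this run.

#!/usr/bin/env bash
# Sketch of per-subsystem provisioning consistent with the Malloc1..Malloc10 output above.
set -e

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

for i in $(seq 1 10); do
    # Backing bdev: 128 MiB malloc bdev with 512-byte blocks (illustrative sizes).
    $RPC bdev_malloc_create -b "Malloc$i" 128 512
    # Subsystem open to any host, with a placeholder serial number.
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000$i"
    # Attach the namespace and expose it on the TCP listener used by this run.
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done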
00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:00.913 { 00:21:00.913 "params": { 00:21:00.913 "name": "Nvme$subsystem", 00:21:00.913 "trtype": "$TEST_TRANSPORT", 00:21:00.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.913 "adrfam": "ipv4", 00:21:00.913 "trsvcid": "$NVMF_PORT", 00:21:00.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.913 "hdgst": ${hdgst:-false}, 00:21:00.913 "ddgst": ${ddgst:-false} 00:21:00.913 }, 00:21:00.913 "method": "bdev_nvme_attach_controller" 00:21:00.913 } 00:21:00.913 EOF 00:21:00.913 )") 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:00.913 { 00:21:00.913 "params": { 00:21:00.913 "name": "Nvme$subsystem", 00:21:00.913 "trtype": "$TEST_TRANSPORT", 00:21:00.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.913 "adrfam": "ipv4", 00:21:00.913 "trsvcid": "$NVMF_PORT", 00:21:00.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.913 "hdgst": ${hdgst:-false}, 00:21:00.913 "ddgst": ${ddgst:-false} 00:21:00.913 }, 00:21:00.913 "method": "bdev_nvme_attach_controller" 00:21:00.913 } 00:21:00.913 EOF 00:21:00.913 )") 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:00.913 { 00:21:00.913 "params": { 00:21:00.913 "name": "Nvme$subsystem", 00:21:00.913 "trtype": "$TEST_TRANSPORT", 00:21:00.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.913 "adrfam": "ipv4", 00:21:00.913 "trsvcid": "$NVMF_PORT", 00:21:00.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.913 "hdgst": ${hdgst:-false}, 00:21:00.913 "ddgst": ${ddgst:-false} 00:21:00.913 }, 00:21:00.913 "method": "bdev_nvme_attach_controller" 00:21:00.913 } 00:21:00.913 EOF 00:21:00.913 )") 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:21:00.913 { 00:21:00.913 "params": { 00:21:00.913 "name": "Nvme$subsystem", 00:21:00.913 "trtype": "$TEST_TRANSPORT", 00:21:00.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.913 "adrfam": "ipv4", 00:21:00.913 "trsvcid": "$NVMF_PORT", 00:21:00.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.913 "hdgst": ${hdgst:-false}, 00:21:00.913 "ddgst": ${ddgst:-false} 00:21:00.913 }, 00:21:00.913 "method": "bdev_nvme_attach_controller" 00:21:00.913 } 00:21:00.913 EOF 00:21:00.913 )") 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:00.913 { 00:21:00.913 "params": { 00:21:00.913 "name": "Nvme$subsystem", 00:21:00.913 "trtype": "$TEST_TRANSPORT", 00:21:00.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.913 "adrfam": "ipv4", 00:21:00.913 "trsvcid": "$NVMF_PORT", 00:21:00.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.913 "hdgst": ${hdgst:-false}, 00:21:00.913 "ddgst": ${ddgst:-false} 00:21:00.913 }, 00:21:00.913 "method": "bdev_nvme_attach_controller" 00:21:00.913 } 00:21:00.913 EOF 00:21:00.913 )") 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:00.913 { 00:21:00.913 "params": { 00:21:00.913 "name": "Nvme$subsystem", 00:21:00.913 "trtype": "$TEST_TRANSPORT", 00:21:00.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.913 "adrfam": "ipv4", 00:21:00.913 "trsvcid": "$NVMF_PORT", 00:21:00.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.913 "hdgst": ${hdgst:-false}, 00:21:00.913 "ddgst": ${ddgst:-false} 00:21:00.913 }, 00:21:00.913 "method": "bdev_nvme_attach_controller" 00:21:00.913 } 00:21:00.913 EOF 00:21:00.913 )") 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:00.913 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:00.913 { 00:21:00.913 "params": { 00:21:00.913 "name": "Nvme$subsystem", 00:21:00.913 "trtype": "$TEST_TRANSPORT", 00:21:00.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.913 "adrfam": "ipv4", 00:21:00.913 "trsvcid": "$NVMF_PORT", 00:21:00.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.913 "hdgst": ${hdgst:-false}, 00:21:00.913 "ddgst": ${ddgst:-false} 00:21:00.913 }, 00:21:00.913 "method": "bdev_nvme_attach_controller" 00:21:00.913 } 00:21:00.913 EOF 00:21:00.913 )") 00:21:00.914 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:00.914 [2024-07-26 14:02:28.267147] Starting SPDK 
v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:21:00.914 [2024-07-26 14:02:28.267195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3021221 ] 00:21:00.914 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:00.914 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:00.914 { 00:21:00.914 "params": { 00:21:00.914 "name": "Nvme$subsystem", 00:21:00.914 "trtype": "$TEST_TRANSPORT", 00:21:00.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.914 "adrfam": "ipv4", 00:21:00.914 "trsvcid": "$NVMF_PORT", 00:21:00.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.914 "hdgst": ${hdgst:-false}, 00:21:00.914 "ddgst": ${ddgst:-false} 00:21:00.914 }, 00:21:00.914 "method": "bdev_nvme_attach_controller" 00:21:00.914 } 00:21:00.914 EOF 00:21:00.914 )") 00:21:00.914 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:00.914 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:00.914 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:00.914 { 00:21:00.914 "params": { 00:21:00.914 "name": "Nvme$subsystem", 00:21:00.914 "trtype": "$TEST_TRANSPORT", 00:21:00.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.914 "adrfam": "ipv4", 00:21:00.914 "trsvcid": "$NVMF_PORT", 00:21:00.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.914 "hdgst": ${hdgst:-false}, 00:21:00.914 "ddgst": ${ddgst:-false} 00:21:00.914 }, 00:21:00.914 "method": "bdev_nvme_attach_controller" 00:21:00.914 } 00:21:00.914 EOF 00:21:00.914 )") 00:21:00.914 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:00.914 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:00.914 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:00.914 { 00:21:00.914 "params": { 00:21:00.914 "name": "Nvme$subsystem", 00:21:00.914 "trtype": "$TEST_TRANSPORT", 00:21:00.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.914 "adrfam": "ipv4", 00:21:00.914 "trsvcid": "$NVMF_PORT", 00:21:00.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.914 "hdgst": ${hdgst:-false}, 00:21:00.914 "ddgst": ${ddgst:-false} 00:21:00.914 }, 00:21:00.914 "method": "bdev_nvme_attach_controller" 00:21:00.914 } 00:21:00.914 EOF 00:21:00.914 )") 00:21:00.914 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:00.914 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
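The heredoc/jq trace above builds, one stanza per subsystem, the bdev_nvme_attach_controller configuration that bdevperf receives on /dev/fd/63 (the assembled JSON is printed just below). Once bdevperf is running, the harness's waitforio helper, traced a little further down, decides whether the verify workload is making progress; the sketch below approximates that loop, with the socket path, bdev name, 100-read threshold and 0.25 s interval taken from the trace.

#!/usr/bin/env bash
# Sketch of the waitforio polling traced below: give Nvme1n1 up to ten chances,
# 0.25 s apart, to report at least 100 completed reads via bdevperf's RPC socket.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BDEVPERF_SOCK=/var/tmp/bdevperf.sock   # socket passed to bdevperf with -r in this run

ret=1
i=10
while (( i != 0 )); do
    read_io_count=$("$SPDK_DIR/scripts/rpc.py" -s "$BDEVPERF_SOCK" \
        bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
    if [ "${read_io_count:-0}" -ge 100 ]; then
        ret=0
        break
    fi
    sleep 0.25
    (( i-- ))
done
exit $ret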
00:21:00.914 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.914 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:00.914 14:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:00.914 "params": { 00:21:00.914 "name": "Nvme1", 00:21:00.914 "trtype": "tcp", 00:21:00.914 "traddr": "10.0.0.2", 00:21:00.914 "adrfam": "ipv4", 00:21:00.914 "trsvcid": "4420", 00:21:00.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.914 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:00.914 "hdgst": false, 00:21:00.914 "ddgst": false 00:21:00.914 }, 00:21:00.914 "method": "bdev_nvme_attach_controller" 00:21:00.914 },{ 00:21:00.914 "params": { 00:21:00.914 "name": "Nvme2", 00:21:00.914 "trtype": "tcp", 00:21:00.914 "traddr": "10.0.0.2", 00:21:00.914 "adrfam": "ipv4", 00:21:00.914 "trsvcid": "4420", 00:21:00.914 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:00.914 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:00.914 "hdgst": false, 00:21:00.914 "ddgst": false 00:21:00.914 }, 00:21:00.914 "method": "bdev_nvme_attach_controller" 00:21:00.914 },{ 00:21:00.914 "params": { 00:21:00.914 "name": "Nvme3", 00:21:00.914 "trtype": "tcp", 00:21:00.914 "traddr": "10.0.0.2", 00:21:00.914 "adrfam": "ipv4", 00:21:00.914 "trsvcid": "4420", 00:21:00.914 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:00.914 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:00.914 "hdgst": false, 00:21:00.914 "ddgst": false 00:21:00.914 }, 00:21:00.914 "method": "bdev_nvme_attach_controller" 00:21:00.914 },{ 00:21:00.914 "params": { 00:21:00.914 "name": "Nvme4", 00:21:00.914 "trtype": "tcp", 00:21:00.914 "traddr": "10.0.0.2", 00:21:00.914 "adrfam": "ipv4", 00:21:00.914 "trsvcid": "4420", 00:21:00.914 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:00.914 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:00.914 "hdgst": false, 00:21:00.914 "ddgst": false 00:21:00.914 }, 00:21:00.914 "method": "bdev_nvme_attach_controller" 00:21:00.914 },{ 00:21:00.914 "params": { 00:21:00.914 "name": "Nvme5", 00:21:00.914 "trtype": "tcp", 00:21:00.914 "traddr": "10.0.0.2", 00:21:00.914 "adrfam": "ipv4", 00:21:00.914 "trsvcid": "4420", 00:21:00.914 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:00.914 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:00.914 "hdgst": false, 00:21:00.914 "ddgst": false 00:21:00.914 }, 00:21:00.914 "method": "bdev_nvme_attach_controller" 00:21:00.914 },{ 00:21:00.914 "params": { 00:21:00.914 "name": "Nvme6", 00:21:00.914 "trtype": "tcp", 00:21:00.914 "traddr": "10.0.0.2", 00:21:00.914 "adrfam": "ipv4", 00:21:00.914 "trsvcid": "4420", 00:21:00.914 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:00.914 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:00.914 "hdgst": false, 00:21:00.914 "ddgst": false 00:21:00.914 }, 00:21:00.914 "method": "bdev_nvme_attach_controller" 00:21:00.914 },{ 00:21:00.914 "params": { 00:21:00.914 "name": "Nvme7", 00:21:00.914 "trtype": "tcp", 00:21:00.914 "traddr": "10.0.0.2", 00:21:00.914 "adrfam": "ipv4", 00:21:00.914 "trsvcid": "4420", 00:21:00.914 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:00.914 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:00.914 "hdgst": false, 00:21:00.914 "ddgst": false 00:21:00.914 }, 00:21:00.914 "method": "bdev_nvme_attach_controller" 00:21:00.914 },{ 00:21:00.914 "params": { 00:21:00.914 "name": "Nvme8", 00:21:00.914 "trtype": "tcp", 00:21:00.914 "traddr": "10.0.0.2", 00:21:00.914 "adrfam": "ipv4", 00:21:00.914 "trsvcid": "4420", 00:21:00.914 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:21:00.914 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:00.914 "hdgst": false, 00:21:00.914 "ddgst": false 00:21:00.914 }, 00:21:00.914 "method": "bdev_nvme_attach_controller" 00:21:00.914 },{ 00:21:00.914 "params": { 00:21:00.914 "name": "Nvme9", 00:21:00.914 "trtype": "tcp", 00:21:00.914 "traddr": "10.0.0.2", 00:21:00.914 "adrfam": "ipv4", 00:21:00.914 "trsvcid": "4420", 00:21:00.914 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:00.914 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:00.914 "hdgst": false, 00:21:00.914 "ddgst": false 00:21:00.914 }, 00:21:00.914 "method": "bdev_nvme_attach_controller" 00:21:00.914 },{ 00:21:00.914 "params": { 00:21:00.914 "name": "Nvme10", 00:21:00.914 "trtype": "tcp", 00:21:00.914 "traddr": "10.0.0.2", 00:21:00.914 "adrfam": "ipv4", 00:21:00.914 "trsvcid": "4420", 00:21:00.914 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:00.914 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:00.914 "hdgst": false, 00:21:00.914 "ddgst": false 00:21:00.914 }, 00:21:00.914 "method": "bdev_nvme_attach_controller" 00:21:00.914 }' 00:21:00.914 [2024-07-26 14:02:28.323648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.211 [2024-07-26 14:02:28.405945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.593 Running I/O for 10 seconds... 00:21:02.593 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:02.593 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:02.593 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:02.593 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.593 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:02.593 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.593 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:02.593 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:02.593 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:02.593 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:02.593 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:02.593 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:02.593 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:02.593 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:02.593 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:02.593 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.593 14:02:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:02.593 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.593 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:02.593 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:02.593 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:02.854 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:02.854 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:02.854 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:02.854 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:02.854 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.854 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:02.854 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.854 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:02.854 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:02.854 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:03.114 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:03.114 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:03.114 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:03.114 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:03.114 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.114 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:03.114 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.374 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:21:03.375 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:03.375 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:03.375 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:03.375 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:03.375 14:02:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3021221 00:21:03.375 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3021221 ']' 00:21:03.375 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3021221 00:21:03.375 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:03.375 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:03.375 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3021221 00:21:03.375 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:03.375 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:03.375 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3021221' 00:21:03.375 killing process with pid 3021221 00:21:03.375 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3021221 00:21:03.375 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3021221 00:21:03.375 Received shutdown signal, test time was about 0.944120 seconds 00:21:03.375 00:21:03.375 Latency(us) 00:21:03.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.375 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:03.375 Verification LBA range: start 0x0 length 0x400 00:21:03.375 Nvme1n1 : 0.94 270.28 16.89 0.00 0.00 233987.32 22567.18 231598.53 00:21:03.375 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:03.375 Verification LBA range: start 0x0 length 0x400 00:21:03.375 Nvme2n1 : 0.90 283.91 17.74 0.00 0.00 218953.91 21199.47 215186.03 00:21:03.375 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:03.375 Verification LBA range: start 0x0 length 0x400 00:21:03.375 Nvme3n1 : 0.93 273.86 17.12 0.00 0.00 223267.84 20743.57 229774.91 00:21:03.375 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:03.375 Verification LBA range: start 0x0 length 0x400 00:21:03.375 Nvme4n1 : 0.91 282.00 17.62 0.00 0.00 212598.21 23137.06 227039.50 00:21:03.375 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:03.375 Verification LBA range: start 0x0 length 0x400 00:21:03.375 Nvme5n1 : 0.92 278.94 17.43 0.00 0.00 210867.42 21427.42 211538.81 00:21:03.375 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:03.375 Verification LBA range: start 0x0 length 0x400 00:21:03.375 Nvme6n1 : 0.92 278.72 17.42 0.00 0.00 207072.83 23820.91 228863.11 00:21:03.375 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:03.375 Verification LBA range: start 0x0 length 0x400 00:21:03.375 Nvme7n1 : 0.92 208.88 13.06 0.00 0.00 270722.45 41943.04 260776.29 00:21:03.375 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:03.375 Verification LBA range: start 0x0 length 0x400 00:21:03.375 Nvme8n1 : 0.93 206.69 12.92 0.00 0.00 269557.17 
24618.74 286306.84 00:21:03.375 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:03.375 Verification LBA range: start 0x0 length 0x400 00:21:03.375 Nvme9n1 : 0.90 213.96 13.37 0.00 0.00 253782.52 21655.37 238892.97 00:21:03.375 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:03.375 Verification LBA range: start 0x0 length 0x400 00:21:03.375 Nvme10n1 : 0.88 217.53 13.60 0.00 0.00 243665.99 22225.25 225215.89 00:21:03.375 =================================================================================================================== 00:21:03.375 Total : 2514.77 157.17 0.00 0.00 231670.51 20743.57 286306.84 00:21:03.635 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:21:04.572 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3020936 00:21:04.572 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:21:04.572 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:04.572 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:04.572 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:04.572 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:04.572 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:04.572 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:04.572 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:04.572 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:04.572 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:04.572 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:04.572 rmmod nvme_tcp 00:21:04.572 rmmod nvme_fabrics 00:21:04.572 rmmod nvme_keyring 00:21:04.572 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:04.572 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:04.572 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:04.572 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3020936 ']' 00:21:04.572 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3020936 00:21:04.572 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3020936 ']' 00:21:04.572 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3020936 00:21:04.572 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # 
uname 00:21:04.573 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:04.573 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3020936 00:21:04.832 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:04.832 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:04.832 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3020936' 00:21:04.832 killing process with pid 3020936 00:21:04.832 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3020936 00:21:04.832 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3020936 00:21:05.093 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:05.093 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:05.093 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:05.093 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:05.093 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:05.093 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.093 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.093 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:07.640 00:21:07.640 real 0m7.934s 00:21:07.640 user 0m24.071s 00:21:07.640 sys 0m1.333s 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:07.640 ************************************ 00:21:07.640 END TEST nvmf_shutdown_tc2 00:21:07.640 ************************************ 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:07.640 ************************************ 00:21:07.640 START TEST nvmf_shutdown_tc3 00:21:07.640 ************************************ 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:21:07.640 14:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 
-- # mlx=() 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.640 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:07.641 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == 
rdma ]] 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:07.641 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:07.641 Found net devices under 0000:86:00.0: cvl_0_0 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.641 14:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:07.641 Found net devices under 0000:86:00.1: cvl_0_1 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:07.641 14:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:07.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:07.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:21:07.641 00:21:07.641 --- 10.0.0.2 ping statistics --- 00:21:07.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.641 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:07.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:07.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:21:07.641 00:21:07.641 --- 10.0.0.1 ping statistics --- 00:21:07.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.641 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3022486 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3022486 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:07.641 14:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3022486 ']' 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:07.641 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.642 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:07.642 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.642 [2024-07-26 14:02:34.933900] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:21:07.642 [2024-07-26 14:02:34.933945] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.642 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.642 [2024-07-26 14:02:34.991130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:07.642 [2024-07-26 14:02:35.063849] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.642 [2024-07-26 14:02:35.063891] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.642 [2024-07-26 14:02:35.063897] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.642 [2024-07-26 14:02:35.063903] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.642 [2024-07-26 14:02:35.063908] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:07.642 [2024-07-26 14:02:35.064031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.642 [2024-07-26 14:02:35.064130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:07.642 [2024-07-26 14:02:35.064215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.642 [2024-07-26 14:02:35.064217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:08.583 [2024-07-26 14:02:35.773456] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.583 14:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:08.583 Malloc1 00:21:08.583 [2024-07-26 14:02:35.869168] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.583 Malloc2 00:21:08.583 Malloc3 00:21:08.583 Malloc4 00:21:08.583 Malloc5 00:21:08.843 Malloc6 00:21:08.843 Malloc7 00:21:08.843 Malloc8 00:21:08.843 Malloc9 00:21:08.843 Malloc10 00:21:08.843 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.843 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:08.844 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:08.844 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:09.104 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3022772 00:21:09.104 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3022772 /var/tmp/bdevperf.sock 00:21:09.104 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3022772 ']' 00:21:09.104 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:09.104 14:02:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:09.104 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:09.104 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:09.104 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:09.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:09.104 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:09.104 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:09.104 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:09.104 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:09.105 { 00:21:09.105 "params": { 00:21:09.105 "name": "Nvme$subsystem", 00:21:09.105 "trtype": "$TEST_TRANSPORT", 00:21:09.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.105 "adrfam": "ipv4", 00:21:09.105 "trsvcid": "$NVMF_PORT", 00:21:09.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.105 "hdgst": ${hdgst:-false}, 00:21:09.105 "ddgst": ${ddgst:-false} 00:21:09.105 }, 00:21:09.105 "method": "bdev_nvme_attach_controller" 00:21:09.105 } 00:21:09.105 EOF 00:21:09.105 )") 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:09.105 { 00:21:09.105 "params": { 00:21:09.105 "name": "Nvme$subsystem", 00:21:09.105 "trtype": "$TEST_TRANSPORT", 00:21:09.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.105 "adrfam": "ipv4", 00:21:09.105 "trsvcid": "$NVMF_PORT", 00:21:09.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.105 "hdgst": ${hdgst:-false}, 00:21:09.105 "ddgst": ${ddgst:-false} 00:21:09.105 }, 00:21:09.105 "method": "bdev_nvme_attach_controller" 00:21:09.105 } 00:21:09.105 EOF 00:21:09.105 )") 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:09.105 { 00:21:09.105 "params": { 00:21:09.105 
"name": "Nvme$subsystem", 00:21:09.105 "trtype": "$TEST_TRANSPORT", 00:21:09.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.105 "adrfam": "ipv4", 00:21:09.105 "trsvcid": "$NVMF_PORT", 00:21:09.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.105 "hdgst": ${hdgst:-false}, 00:21:09.105 "ddgst": ${ddgst:-false} 00:21:09.105 }, 00:21:09.105 "method": "bdev_nvme_attach_controller" 00:21:09.105 } 00:21:09.105 EOF 00:21:09.105 )") 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:09.105 { 00:21:09.105 "params": { 00:21:09.105 "name": "Nvme$subsystem", 00:21:09.105 "trtype": "$TEST_TRANSPORT", 00:21:09.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.105 "adrfam": "ipv4", 00:21:09.105 "trsvcid": "$NVMF_PORT", 00:21:09.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.105 "hdgst": ${hdgst:-false}, 00:21:09.105 "ddgst": ${ddgst:-false} 00:21:09.105 }, 00:21:09.105 "method": "bdev_nvme_attach_controller" 00:21:09.105 } 00:21:09.105 EOF 00:21:09.105 )") 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:09.105 { 00:21:09.105 "params": { 00:21:09.105 "name": "Nvme$subsystem", 00:21:09.105 "trtype": "$TEST_TRANSPORT", 00:21:09.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.105 "adrfam": "ipv4", 00:21:09.105 "trsvcid": "$NVMF_PORT", 00:21:09.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.105 "hdgst": ${hdgst:-false}, 00:21:09.105 "ddgst": ${ddgst:-false} 00:21:09.105 }, 00:21:09.105 "method": "bdev_nvme_attach_controller" 00:21:09.105 } 00:21:09.105 EOF 00:21:09.105 )") 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:09.105 { 00:21:09.105 "params": { 00:21:09.105 "name": "Nvme$subsystem", 00:21:09.105 "trtype": "$TEST_TRANSPORT", 00:21:09.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.105 "adrfam": "ipv4", 00:21:09.105 "trsvcid": "$NVMF_PORT", 00:21:09.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.105 "hdgst": ${hdgst:-false}, 00:21:09.105 "ddgst": ${ddgst:-false} 00:21:09.105 }, 00:21:09.105 "method": "bdev_nvme_attach_controller" 00:21:09.105 } 00:21:09.105 EOF 00:21:09.105 )") 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:09.105 { 00:21:09.105 "params": { 00:21:09.105 "name": "Nvme$subsystem", 00:21:09.105 "trtype": "$TEST_TRANSPORT", 00:21:09.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.105 "adrfam": "ipv4", 00:21:09.105 "trsvcid": "$NVMF_PORT", 00:21:09.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.105 "hdgst": ${hdgst:-false}, 00:21:09.105 "ddgst": ${ddgst:-false} 00:21:09.105 }, 00:21:09.105 "method": "bdev_nvme_attach_controller" 00:21:09.105 } 00:21:09.105 EOF 00:21:09.105 )") 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:09.105 [2024-07-26 14:02:36.350976] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:21:09.105 [2024-07-26 14:02:36.351027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3022772 ] 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:09.105 { 00:21:09.105 "params": { 00:21:09.105 "name": "Nvme$subsystem", 00:21:09.105 "trtype": "$TEST_TRANSPORT", 00:21:09.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.105 "adrfam": "ipv4", 00:21:09.105 "trsvcid": "$NVMF_PORT", 00:21:09.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.105 "hdgst": ${hdgst:-false}, 00:21:09.105 "ddgst": ${ddgst:-false} 00:21:09.105 }, 00:21:09.105 "method": "bdev_nvme_attach_controller" 00:21:09.105 } 00:21:09.105 EOF 00:21:09.105 )") 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:09.105 { 00:21:09.105 "params": { 00:21:09.105 "name": "Nvme$subsystem", 00:21:09.105 "trtype": "$TEST_TRANSPORT", 00:21:09.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.105 "adrfam": "ipv4", 00:21:09.105 "trsvcid": "$NVMF_PORT", 00:21:09.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.105 "hdgst": ${hdgst:-false}, 00:21:09.105 "ddgst": ${ddgst:-false} 00:21:09.105 }, 00:21:09.105 "method": "bdev_nvme_attach_controller" 00:21:09.105 } 00:21:09.105 EOF 00:21:09.105 )") 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:09.105 { 00:21:09.105 "params": { 00:21:09.105 "name": "Nvme$subsystem", 00:21:09.105 "trtype": "$TEST_TRANSPORT", 00:21:09.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.105 
"adrfam": "ipv4", 00:21:09.105 "trsvcid": "$NVMF_PORT", 00:21:09.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.105 "hdgst": ${hdgst:-false}, 00:21:09.105 "ddgst": ${ddgst:-false} 00:21:09.105 }, 00:21:09.105 "method": "bdev_nvme_attach_controller" 00:21:09.105 } 00:21:09.105 EOF 00:21:09.105 )") 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:21:09.105 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:09.106 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:09.106 "params": { 00:21:09.106 "name": "Nvme1", 00:21:09.106 "trtype": "tcp", 00:21:09.106 "traddr": "10.0.0.2", 00:21:09.106 "adrfam": "ipv4", 00:21:09.106 "trsvcid": "4420", 00:21:09.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:09.106 "hdgst": false, 00:21:09.106 "ddgst": false 00:21:09.106 }, 00:21:09.106 "method": "bdev_nvme_attach_controller" 00:21:09.106 },{ 00:21:09.106 "params": { 00:21:09.106 "name": "Nvme2", 00:21:09.106 "trtype": "tcp", 00:21:09.106 "traddr": "10.0.0.2", 00:21:09.106 "adrfam": "ipv4", 00:21:09.106 "trsvcid": "4420", 00:21:09.106 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:09.106 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:09.106 "hdgst": false, 00:21:09.106 "ddgst": false 00:21:09.106 }, 00:21:09.106 "method": "bdev_nvme_attach_controller" 00:21:09.106 },{ 00:21:09.106 "params": { 00:21:09.106 "name": "Nvme3", 00:21:09.106 "trtype": "tcp", 00:21:09.106 "traddr": "10.0.0.2", 00:21:09.106 "adrfam": "ipv4", 00:21:09.106 "trsvcid": "4420", 00:21:09.106 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:09.106 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:09.106 "hdgst": false, 00:21:09.106 "ddgst": false 00:21:09.106 }, 00:21:09.106 "method": "bdev_nvme_attach_controller" 00:21:09.106 },{ 00:21:09.106 "params": { 00:21:09.106 "name": "Nvme4", 00:21:09.106 "trtype": "tcp", 00:21:09.106 "traddr": "10.0.0.2", 00:21:09.106 "adrfam": "ipv4", 00:21:09.106 "trsvcid": "4420", 00:21:09.106 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:09.106 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:09.106 "hdgst": false, 00:21:09.106 "ddgst": false 00:21:09.106 }, 00:21:09.106 "method": "bdev_nvme_attach_controller" 00:21:09.106 },{ 00:21:09.106 "params": { 00:21:09.106 "name": "Nvme5", 00:21:09.106 "trtype": "tcp", 00:21:09.106 "traddr": "10.0.0.2", 00:21:09.106 "adrfam": "ipv4", 00:21:09.106 "trsvcid": "4420", 00:21:09.106 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:09.106 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:09.106 "hdgst": false, 00:21:09.106 "ddgst": false 00:21:09.106 }, 00:21:09.106 "method": "bdev_nvme_attach_controller" 00:21:09.106 },{ 00:21:09.106 "params": { 00:21:09.106 "name": "Nvme6", 00:21:09.106 "trtype": "tcp", 00:21:09.106 "traddr": "10.0.0.2", 00:21:09.106 "adrfam": "ipv4", 00:21:09.106 "trsvcid": "4420", 00:21:09.106 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:09.106 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:09.106 "hdgst": false, 00:21:09.106 "ddgst": false 00:21:09.106 }, 00:21:09.106 "method": "bdev_nvme_attach_controller" 00:21:09.106 },{ 00:21:09.106 "params": { 00:21:09.106 "name": "Nvme7", 00:21:09.106 "trtype": "tcp", 00:21:09.106 "traddr": "10.0.0.2", 
00:21:09.106 "adrfam": "ipv4", 00:21:09.106 "trsvcid": "4420", 00:21:09.106 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:09.106 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:09.106 "hdgst": false, 00:21:09.106 "ddgst": false 00:21:09.106 }, 00:21:09.106 "method": "bdev_nvme_attach_controller" 00:21:09.106 },{ 00:21:09.106 "params": { 00:21:09.106 "name": "Nvme8", 00:21:09.106 "trtype": "tcp", 00:21:09.106 "traddr": "10.0.0.2", 00:21:09.106 "adrfam": "ipv4", 00:21:09.106 "trsvcid": "4420", 00:21:09.106 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:09.106 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:09.106 "hdgst": false, 00:21:09.106 "ddgst": false 00:21:09.106 }, 00:21:09.106 "method": "bdev_nvme_attach_controller" 00:21:09.106 },{ 00:21:09.106 "params": { 00:21:09.106 "name": "Nvme9", 00:21:09.106 "trtype": "tcp", 00:21:09.106 "traddr": "10.0.0.2", 00:21:09.106 "adrfam": "ipv4", 00:21:09.106 "trsvcid": "4420", 00:21:09.106 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:09.106 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:09.106 "hdgst": false, 00:21:09.106 "ddgst": false 00:21:09.106 }, 00:21:09.106 "method": "bdev_nvme_attach_controller" 00:21:09.106 },{ 00:21:09.106 "params": { 00:21:09.106 "name": "Nvme10", 00:21:09.106 "trtype": "tcp", 00:21:09.106 "traddr": "10.0.0.2", 00:21:09.106 "adrfam": "ipv4", 00:21:09.106 "trsvcid": "4420", 00:21:09.106 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:09.106 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:09.106 "hdgst": false, 00:21:09.106 "ddgst": false 00:21:09.106 }, 00:21:09.106 "method": "bdev_nvme_attach_controller" 00:21:09.106 }' 00:21:09.106 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.106 [2024-07-26 14:02:36.407545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.106 [2024-07-26 14:02:36.480646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.016 Running I/O for 10 seconds... 
00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3022486 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3022486 ']' 
00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3022486 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3022486 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3022486' 00:21:11.593 killing process with pid 3022486 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 3022486 00:21:11.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 3022486 00:21:11.593 [2024-07-26 14:02:38.994097] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7180 is same with the state(5) to be set 00:21:11.593 [2024-07-26 14:02:38.994170] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7180 is same with the state(5) to be set 00:21:11.593 [2024-07-26 14:02:38.994178] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7180 is same with the state(5) to be set 00:21:11.593 [2024-07-26 14:02:38.994185] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7180 is same with the state(5) to be set 00:21:11.593 [2024-07-26 14:02:38.994192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7180 is same with the state(5) to be set 00:21:11.593 [2024-07-26 14:02:38.994198] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7180 is same with the state(5) to be set 00:21:11.593 [2024-07-26 14:02:38.994204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7180 is same with the state(5) to be set 00:21:11.593 [2024-07-26 14:02:38.994210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7180 is same with the state(5) to be set 00:21:11.593 [2024-07-26 14:02:38.994216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7180 is same with the state(5) to be set 00:21:11.593 [2024-07-26 14:02:38.994222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7180 is same with the state(5) to be set 00:21:11.593 [2024-07-26 14:02:38.994228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7180 is same with the state(5) to be set 00:21:11.593 [2024-07-26 14:02:38.994234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7180 is same with the state(5) to be set 00:21:11.593 [2024-07-26 14:02:38.994244] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7180 is same with the state(5) to be set 00:21:11.593 [2024-07-26 14:02:38.994250] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x22a7180 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set
14:02:38.999803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999809] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999827] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999833] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999839] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999845] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999851] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999856] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999862] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999874] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999880] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999891] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999897] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999903] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999908] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999914] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999923] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same 
with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999936] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999941] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999947] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:38.999953] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9300 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:39.000831] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:11.594 [2024-07-26 14:02:39.001867] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:11.594 [2024-07-26 14:02:39.003407] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.594 [2024-07-26 14:02:39.003428] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003435] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003467] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003490] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003502] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003508] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 
[2024-07-26 14:02:39.003520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003526] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003538] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003559] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003565] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003572] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003579] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003596] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003608] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003614] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003625] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003632] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003651] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003657] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003669] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003674] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003680] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003692] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003697] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003703] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003720] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003733] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003739] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003757] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003763] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003776] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003782] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003788] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003794] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003800] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.003805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7640 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.004937] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.004964] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.004971] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.004977] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.004984] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.004990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.004995] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.005002] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.005009] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.005015] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.005021] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.005031] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.005038] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.005049] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.005055] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.005062] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.005068] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.005074] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the 
state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.005080] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.005086] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.005092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.595 [2024-07-26 14:02:39.005098] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005105] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005111] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005117] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005123] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005141] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005147] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005153] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005159] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005165] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005193] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005198] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005206] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005224] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005258] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005264] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005270] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005275] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005281] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005293] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005299] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005305] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005311] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005323] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 
14:02:39.005346] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.005352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7b00 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.008055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.596 [2024-07-26 14:02:39.008080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.596 [2024-07-26 14:02:39.008092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.596 [2024-07-26 14:02:39.008099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.596 [2024-07-26 14:02:39.008106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.596 [2024-07-26 14:02:39.008113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.596 [2024-07-26 14:02:39.008120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.596 [2024-07-26 14:02:39.008127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.596 [2024-07-26 14:02:39.008134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2379ee0 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.008190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.596 [2024-07-26 14:02:39.008198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.596 [2024-07-26 14:02:39.008206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.596 [2024-07-26 14:02:39.008212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.596 [2024-07-26 14:02:39.008219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.596 [2024-07-26 14:02:39.008226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.596 [2024-07-26 14:02:39.008233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.596 [2024-07-26 14:02:39.008239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.596 [2024-07-26 14:02:39.008245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234dc70 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.008274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.596 [2024-07-26 14:02:39.008282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.596 [2024-07-26 14:02:39.008289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.596 [2024-07-26 14:02:39.008296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.596 [2024-07-26 14:02:39.008302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.596 [2024-07-26 14:02:39.008309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.596 [2024-07-26 14:02:39.008316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.596 [2024-07-26 14:02:39.008323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.596 [2024-07-26 14:02:39.008329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250a860 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.008361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.596 [2024-07-26 14:02:39.008372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.596 [2024-07-26 14:02:39.008381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.596 [2024-07-26 14:02:39.008387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.596 [2024-07-26 14:02:39.008394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.596 [2024-07-26 14:02:39.008401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.596 [2024-07-26 14:02:39.008408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.596 [2024-07-26 14:02:39.008415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.596 [2024-07-26 14:02:39.008421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2519910 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.011286] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.011315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.011322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.596 [2024-07-26 14:02:39.011328] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011354] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011372] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011377] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011384] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011390] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011396] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011407] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011413] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011435] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011452] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the 
state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011476] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011493] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011542] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011559] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011565] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011597] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011609] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011615] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011628] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011634] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011640] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011645] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011651] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011657] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011669] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011681] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.011687] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7fe0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.012334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a84a0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.012356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a84a0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.012858] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2494230 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.013661] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.013676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 
14:02:39.013682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.013688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.597 [2024-07-26 14:02:39.013695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013707] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013716] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013734] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013741] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013752] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013758] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013770] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013776] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013782] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013788] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013794] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013799] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013811] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same 
with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013817] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013822] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013834] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013852] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013858] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013864] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013890] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013896] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013902] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013908] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013914] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013920] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013927] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013933] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013952] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013957] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013963] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013969] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013981] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013987] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013993] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.013999] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014004] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014010] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014016] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014021] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014027] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014039] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014050] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24946f0 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014836] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014842] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014848] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the 
state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014854] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014859] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014865] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014871] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014877] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014883] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014888] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014894] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014905] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014911] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014917] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014923] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014928] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014934] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014945] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014956] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014962] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.598 [2024-07-26 14:02:39.014968] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.014977] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.014982] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.014988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.014994] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.014999] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015006] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015012] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015024] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015030] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015036] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015047] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015054] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015060] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015066] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015072] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015078] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015084] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015090] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015101] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015107] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 
14:02:39.015113] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015119] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015125] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015131] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015138] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015150] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015162] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.015174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8960 is same with the state(5) to be set 00:21:11.599 [2024-07-26 14:02:39.018897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.018919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 [2024-07-26 14:02:39.018934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.018941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 [2024-07-26 14:02:39.018950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.018956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 [2024-07-26 14:02:39.018964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.018971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 [2024-07-26 14:02:39.018979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.018986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:11.599 [2024-07-26 14:02:39.018994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.019000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 [2024-07-26 14:02:39.019008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.019014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 [2024-07-26 14:02:39.019023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.019029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 [2024-07-26 14:02:39.019037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.019051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 [2024-07-26 14:02:39.019059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.019066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 [2024-07-26 14:02:39.019078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.019085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 [2024-07-26 14:02:39.019093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.019100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 [2024-07-26 14:02:39.019108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.019115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 [2024-07-26 14:02:39.019123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.019129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 [2024-07-26 14:02:39.019137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.019144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 
[2024-07-26 14:02:39.019152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.019159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 [2024-07-26 14:02:39.019166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.019173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 [2024-07-26 14:02:39.019181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.019188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 [2024-07-26 14:02:39.019196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.019202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 [2024-07-26 14:02:39.019210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.019217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 [2024-07-26 14:02:39.019224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.019231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 [2024-07-26 14:02:39.019239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.019245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 [2024-07-26 14:02:39.019253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.599 [2024-07-26 14:02:39.019261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.599 [2024-07-26 14:02:39.019269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 
14:02:39.019300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 
14:02:39.019448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019594] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.600 [2024-07-26 14:02:39.019843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.600 [2024-07-26 14:02:39.019849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.019857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.019864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.019928] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2349970 was disconnected and freed. reset controller. 
00:21:11.601 [2024-07-26 14:02:39.020355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 
[2024-07-26 14:02:39.020523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 
14:02:39.020671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 
14:02:39.020817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.601 [2024-07-26 14:02:39.020911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.601 [2024-07-26 14:02:39.020919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.020925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.020933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.020940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.020948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.020954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.020962] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.020968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.020976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.020983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.020990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.020997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.021013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.021027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.021049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.021064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.021079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.021093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.021107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.021121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.021136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.021150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.021164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.021182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.021196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.021211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.021226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.021242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.021257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021266] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.021272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.021287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.021301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.602 [2024-07-26 14:02:39.021315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:11.602 [2024-07-26 14:02:39.021389] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2499210 was disconnected and freed. reset controller. 00:21:11.602 [2024-07-26 14:02:39.021442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2379ee0 (9): Bad file descriptor 00:21:11.602 [2024-07-26 14:02:39.021473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.602 [2024-07-26 14:02:39.021482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.602 [2024-07-26 14:02:39.021496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.602 [2024-07-26 14:02:39.021510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.602 [2024-07-26 14:02:39.021524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.602 [2024-07-26 14:02:39.021530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9c340 is same with the state(5) to be set 00:21:11.602 [2024-07-26 14:02:39.021556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:21:11.603 [2024-07-26 14:02:39.021568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.603 [2024-07-26 14:02:39.021575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.603 [2024-07-26 14:02:39.021582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.603 [2024-07-26 14:02:39.021590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.603 [2024-07-26 14:02:39.021596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.603 [2024-07-26 14:02:39.021603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.603 [2024-07-26 14:02:39.021610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.603 [2024-07-26 14:02:39.021616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250aa40 is same with the state(5) to be set 00:21:11.603 [2024-07-26 14:02:39.021634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.603 [2024-07-26 14:02:39.021643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.603 [2024-07-26 14:02:39.021651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.603 [2024-07-26 14:02:39.021657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.603 [2024-07-26 14:02:39.021664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.603 [2024-07-26 14:02:39.021670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.603 [2024-07-26 14:02:39.021677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.603 [2024-07-26 14:02:39.021684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.603 [2024-07-26 14:02:39.021690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f1160 is same with the state(5) to be set 00:21:11.603 [2024-07-26 14:02:39.021714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.603 [2024-07-26 14:02:39.021722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.603 [2024-07-26 14:02:39.021728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.603 [2024-07-26 14:02:39.021735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.603 [2024-07-26 14:02:39.021742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.603 [2024-07-26 14:02:39.021749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.603 [2024-07-26 14:02:39.021756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.603 [2024-07-26 14:02:39.021763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.603 [2024-07-26 14:02:39.021769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2516ea0 is same with the state(5) to be set 00:21:11.603 [2024-07-26 14:02:39.021785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234dc70 (9): Bad file descriptor 00:21:11.603 [2024-07-26 14:02:39.021807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.603 [2024-07-26 14:02:39.021815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.603 [2024-07-26 14:02:39.021822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.603 [2024-07-26 14:02:39.021828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.603 [2024-07-26 14:02:39.021837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.603 [2024-07-26 14:02:39.021843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.603 [2024-07-26 14:02:39.021850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.603 [2024-07-26 14:02:39.021856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.603 [2024-07-26 14:02:39.021862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f1800 is same with the state(5) to be set 00:21:11.603 [2024-07-26 14:02:39.021872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250a860 (9): Bad file descriptor 00:21:11.603 [2024-07-26 14:02:39.021897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.603 [2024-07-26 14:02:39.021905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.603 [2024-07-26 14:02:39.021912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.874 [2024-07-26 14:02:39.028505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:11.874 [2024-07-26 14:02:39.028520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.874 [2024-07-26 14:02:39.028528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.874 [2024-07-26 14:02:39.028542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2370a50 is same with the state(5) to be set 00:21:11.874 [2024-07-26 14:02:39.028562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2519910 (9): Bad file descriptor 00:21:11.874 [2024-07-26 14:02:39.028694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.028706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.028725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.028744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.028759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.028774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.028789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.028803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.028817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.028833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.028847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.028861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.028875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.028890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.028904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.028918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.028935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.028949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.028963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.028978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.028991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.028999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.029006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.029014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.029020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.029028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.874 [2024-07-26 14:02:39.029035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.874 [2024-07-26 14:02:39.029048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029108] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029402] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.875 [2024-07-26 14:02:39.029611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.875 [2024-07-26 14:02:39.029619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.029625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.029633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.029639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.029702] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24a3700 was disconnected and freed. reset controller. 
00:21:11.876 [2024-07-26 14:02:39.029830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.029843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.029854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.029862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.029870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.029877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.029885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.029892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.029900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.029906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.029914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.029921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.029929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.029935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.029943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.029950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.029958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.029964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.029972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.029978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 
[2024-07-26 14:02:39.029986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.029992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030138] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.876 [2024-07-26 14:02:39.030346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.876 [2024-07-26 14:02:39.030353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030727] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.030778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.030837] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2348480 was disconnected and freed. reset controller. 00:21:11.877 [2024-07-26 14:02:39.031965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.031988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.032004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.032013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.032025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.032034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.032051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.032060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.032071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.032080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.032092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.032101] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.032113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.032122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.032133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.032142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.032153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.877 [2024-07-26 14:02:39.032162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.877 [2024-07-26 14:02:39.032173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.878 [2024-07-26 14:02:39.032786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.878 [2024-07-26 14:02:39.032795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.032806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.032815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.032826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.032835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.032846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.032855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.032866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.032875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.032886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.032894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.032908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.032917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:11.879 [2024-07-26 14:02:39.032928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.032937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.032948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.032956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.032967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.032979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.032990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.032999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.033009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.033018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.033029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.033039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.033055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.033063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.033074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.033083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.033095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.033103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.033115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.033124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 
14:02:39.033135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.033144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.037602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.037622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.037633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.037642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.037654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.037662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.037674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.037683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.037698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.037707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.037719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.037728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.037739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.037747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.037838] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2e22cf0 was disconnected and freed. reset controller. 00:21:11.879 [2024-07-26 14:02:39.039103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:11.879 [2024-07-26 14:02:39.039126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f1800 (9): Bad file descriptor 00:21:11.879 [2024-07-26 14:02:39.039158] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:11.879 [2024-07-26 14:02:39.039173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e9c340 (9): Bad file descriptor
00:21:11.879 [2024-07-26 14:02:39.039188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250aa40 (9): Bad file descriptor
00:21:11.879 [2024-07-26 14:02:39.039202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f1160 (9): Bad file descriptor
00:21:11.879 [2024-07-26 14:02:39.039217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2516ea0 (9): Bad file descriptor
00:21:11.879 [2024-07-26 14:02:39.039244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2370a50 (9): Bad file descriptor
00:21:11.879 [2024-07-26 14:02:39.043246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:21:11.879 [2024-07-26 14:02:39.043351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:11.879 [2024-07-26 14:02:39.043368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:11.879 [2024-07-26 14:02:39.043383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:11.879 [2024-07-26 14:02:39.043392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:11.879 [2024-07-26 14:02:39.043405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:11.879 [2024-07-26 14:02:39.043414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:11.879 [2024-07-26 14:02:39.043426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:11.879 [2024-07-26 14:02:39.043435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:11.879 [2024-07-26 14:02:39.043446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:11.879 [2024-07-26 14:02:39.043455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:11.879 [2024-07-26 14:02:39.043467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:11.879 [2024-07-26 14:02:39.043482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:11.879 [2024-07-26 14:02:39.043494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:11.879 [2024-07-26 14:02:39.043503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:11.879 [2024-07-26 14:02:39.043514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.043523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.043535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.043544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.879 [2024-07-26 14:02:39.043555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.879 [2024-07-26 14:02:39.043565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.043576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.043585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.043597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.043606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.043618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.043627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.043638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.043647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.043659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.043668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.043680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.043689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.043700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.043709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.043721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.043730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.043743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.043753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.043764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.043773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.043785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.043794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.043805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.043815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.043826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.043835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.043847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.043856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.043868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.043876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.043888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.043897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.043908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.043918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.043929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:11.880 [2024-07-26 14:02:39.043938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.043950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.043959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.043970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.043979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.043991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.044002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.044013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.044022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.044034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.044049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.044061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.044071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.044082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.044092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.044103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.044112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.044124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.044133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.044144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 
14:02:39.044154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.044165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.044174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.044187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.044197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.044208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.044217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.044229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.044238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.044249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.044259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.044273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.880 [2024-07-26 14:02:39.044282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.880 [2024-07-26 14:02:39.044294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.044303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.044314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.044323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.044334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.044343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.044355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.044364] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.044375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.044384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.044396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.044405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.044416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.044425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.044437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.044446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.044457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.044466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.044477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.044487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.044498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.044507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.044519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.044531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.044542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.044551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.044563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.044572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:11.881 [2024-07-26 14:02:39.044583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:11.881 [2024-07-26 14:02:39.044592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:11.881 [2024-07-26 14:02:39.044604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:11.881 [2024-07-26 14:02:39.044613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:11.881 [2024-07-26 14:02:39.044624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:11.881 [2024-07-26 14:02:39.044634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:11.881 [2024-07-26 14:02:39.044645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:11.881 [2024-07-26 14:02:39.044654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:11.881 [2024-07-26 14:02:39.044665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:11.881 [2024-07-26 14:02:39.044674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:11.881 [2024-07-26 14:02:39.044686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:11.881 [2024-07-26 14:02:39.044695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:11.881 [2024-07-26 14:02:39.044705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a11e0 is same with the state(5) to be set
00:21:11.881 [2024-07-26 14:02:39.046120] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:11.881 [2024-07-26 14:02:39.046908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:21:11.881 [2024-07-26 14:02:39.046932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:11.881 [2024-07-26 14:02:39.047425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:11.881 [2024-07-26 14:02:39.047445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24f1800 with addr=10.0.0.2, port=4420
00:21:11.881 [2024-07-26 14:02:39.047456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f1800 is same with the state(5) to be set
00:21:11.881 [2024-07-26 14:02:39.048236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:11.881 [2024-07-26 14:02:39.048254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x250aa40 with addr=10.0.0.2, port=4420
00:21:11.881
[2024-07-26 14:02:39.048265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250aa40 is same with the state(5) to be set 00:21:11.881 [2024-07-26 14:02:39.048326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.048340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.048355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.048365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.048377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.048386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.048397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.048406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.048418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.048427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.048438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.048448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.048459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.048468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.048479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.048488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.048500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.048509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.048520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.048530] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.048541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.048550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.048561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.048570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.048582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.048594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.048605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.048614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.048625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.881 [2024-07-26 14:02:39.048635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.881 [2024-07-26 14:02:39.048646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.048655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.048667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.048675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.048687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.048696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.048707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.048716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.048727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.048736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.048748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.048757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.048768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.048777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.048788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.048798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.048809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.048818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.048829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.048838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.048851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.048861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.048872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.048881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.048892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.048901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.048913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.048921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.048933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.048942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.048953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.048962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.048973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.048982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.048993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.049002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.049013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.049022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.049034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.049047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.049059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.049068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.049079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.049088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.049101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.049112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.049124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.049133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.882 [2024-07-26 14:02:39.049144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.882 [2024-07-26 14:02:39.049154] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:11.882 [2024-07-26 14:02:39.049165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:11.882 [2024-07-26 14:02:39.049174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeats for cid:41 through cid:63 (lba 21632 through 24448) ...]
00:21:11.883 [2024-07-26 14:02:39.049661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2270 is same with the state(5) to be set
00:21:11.883 [2024-07-26 14:02:39.051382] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:11.883 [2024-07-26 14:02:39.051761] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:11.883 [2024-07-26 14:02:39.052064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:11.883 [2024-07-26 14:02:39.052076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeats for cid:1 through cid:63 (lba 16512 through 24448) ...]
00:21:11.885 [2024-07-26 14:02:39.053119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a6c0 is same with the state(5) to be set
00:21:11.885 [2024-07-26 14:02:39.057159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:11.885 [2024-07-26 14:02:39.057180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:21:11.885 [2024-07-26 14:02:39.057191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:21:11.885 [2024-07-26 14:02:39.057201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:21:11.885 [2024-07-26 14:02:39.057743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:11.885 [2024-07-26 14:02:39.057757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2379ee0 with addr=10.0.0.2, port=4420
00:21:11.885 [2024-07-26 14:02:39.057766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2379ee0 is same with the state(5) to be set
00:21:11.885 [2024-07-26 14:02:39.058292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:11.885 [2024-07-26 14:02:39.058304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9c340 with addr=10.0.0.2, port=4420
00:21:11.885 [2024-07-26 14:02:39.058311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9c340 is same with the state(5) to be set
00:21:11.885 [2024-07-26 14:02:39.058322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f1800 (9): Bad file descriptor
00:21:11.885 [2024-07-26 14:02:39.058332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250aa40 (9): Bad file descriptor
00:21:11.885 [2024-07-26 14:02:39.058375] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:11.885 [2024-07-26 14:02:39.058387] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:11.885 [2024-07-26 14:02:39.058398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e9c340 (9): Bad file descriptor
00:21:11.885 [2024-07-26 14:02:39.058410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2379ee0 (9): Bad file descriptor
00:21:11.885 [2024-07-26 14:02:39.059287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:11.885 [2024-07-26 14:02:39.059306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234dc70 with addr=10.0.0.2, port=4420
00:21:11.885 [2024-07-26 14:02:39.059313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234dc70 is same with the state(5) to be set
00:21:11.885 [2024-07-26 14:02:39.059800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:11.885 [2024-07-26 14:02:39.059811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2516ea0 with addr=10.0.0.2, port=4420
00:21:11.885 [2024-07-26 14:02:39.059818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2516ea0 is same with the state(5) to be set
00:21:11.885 [2024-07-26 14:02:39.060331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:11.885 [2024-07-26 14:02:39.060342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2519910 with addr=10.0.0.2, port=4420
00:21:11.885 [2024-07-26 14:02:39.060349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2519910 is same with the state(5) to be set
00:21:11.885 [2024-07-26 14:02:39.060799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:11.885 [2024-07-26 14:02:39.060809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x250a860 with addr=10.0.0.2, port=4420
00:21:11.885 [2024-07-26 14:02:39.060817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250a860 is same with the state(5) to be set
00:21:11.885 [2024-07-26 14:02:39.060827] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:21:11.885 [2024-07-26 14:02:39.060833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:21:11.885 [2024-07-26 14:02:39.060842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:21:11.885 [2024-07-26 14:02:39.060855] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:21:11.885 [2024-07-26 14:02:39.060862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:21:11.885 [2024-07-26 14:02:39.060868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:21:11.885 [2024-07-26 14:02:39.061189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:11.885 [2024-07-26 14:02:39.061203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeats for cid:1 through cid:63 (lba 16512 through 24448) ...]
00:21:11.887 [2024-07-26 14:02:39.062222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(5) to be set
00:21:11.887 [2024-07-26 14:02:39.063605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:11.887 [2024-07-26 14:02:39.063622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeats for cid:1 through cid:34 (lba 16512 through 20736) ...]
00:21:11.888 [2024-07-26 14:02:39.064341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:11.888 [2024-07-26 14:02:39.064350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:11.888 [2024-07-26 14:02:39.064566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 
14:02:39.064774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.888 [2024-07-26 14:02:39.064918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.888 [2024-07-26 14:02:39.064927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.889 [2024-07-26 14:02:39.064937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2c7b280 is same with the state(5) to be set 00:21:11.889 [2024-07-26 14:02:39.066908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:11.889 [2024-07-26 14:02:39.066927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
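Everything in the dump above is the expected teardown path of the shutdown test: each outstanding READ is completed with the NVMe status ABORTED - SQ DELETION (status code type 00h, status code 08h) as the submission queues are deleted during the controller resets, so the wall of *NOTICE* lines is noise rather than a data-path failure. When triaging a run like this it is usually enough to tally those completions from the captured console output; the snippet below is illustrative only (build.log is a placeholder name, not a file the test writes) and matches the exact strings printed by nvme_qpair.c above:

    # Count completions reported as ABORTED - SQ DELETION in a saved console log,
    # then break the aborted READs down per submission queue id.
    grep -o 'ABORTED - SQ DELETION' build.log | wc -l
    grep -o 'READ sqid:[0-9]*' build.log | sort | uniq -c | sort -rn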
00:21:11.889 [2024-07-26 14:02:39.066937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:21:11.889 task offset: 19712 on job bdev=Nvme6n1 fails
00:21:11.889
00:21:11.889 Latency(us)
00:21:11.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:11.889 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:11.889 Job: Nvme1n1 ended in about 0.76 seconds with error
00:21:11.889 Verification LBA range: start 0x0 length 0x400
00:21:11.889 Nvme1n1 : 0.76 168.28 10.52 84.14 0.00 250606.04 26898.25 246187.41
00:21:11.889 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:11.889 Job: Nvme2n1 ended in about 0.77 seconds with error
00:21:11.889 Verification LBA range: start 0x0 length 0x400
00:21:11.889 Nvme2n1 : 0.77 167.19 10.45 83.60 0.00 247015.81 23592.96 242540.19
00:21:11.889 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:11.889 Job: Nvme3n1 ended in about 0.76 seconds with error
00:21:11.889 Verification LBA range: start 0x0 length 0x400
00:21:11.889 Nvme3n1 : 0.76 251.58 15.72 84.74 0.00 180061.79 20971.52 220656.86
00:21:11.889 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:11.889 Job: Nvme4n1 ended in about 0.78 seconds with error
00:21:11.889 Verification LBA range: start 0x0 length 0x400
00:21:11.889 Nvme4n1 : 0.78 164.51 10.28 82.25 0.00 240646.53 24162.84 235245.75
00:21:11.889 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:11.889 Job: Nvme5n1 ended in about 0.76 seconds with error
00:21:11.889 Verification LBA range: start 0x0 length 0x400
00:21:11.889 Nvme5n1 : 0.76 169.20 10.57 84.60 0.00 228141.26 21997.30 231598.53
00:21:11.889 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:11.889 Job: Nvme6n1 ended in about 0.75 seconds with error
00:21:11.889 Verification LBA range: start 0x0 length 0x400
00:21:11.889 Nvme6n1 : 0.75 171.42 10.71 85.71 0.00 219615.94 23365.01 257129.07
00:21:11.889 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:11.889 Job: Nvme7n1 ended in about 0.78 seconds with error
00:21:11.889 Verification LBA range: start 0x0 length 0x400
00:21:11.889 Nvme7n1 : 0.78 163.93 10.25 81.96 0.00 225708.30 23251.03 246187.41
00:21:11.889 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:11.889 Job: Nvme8n1 ended in about 0.76 seconds with error
00:21:11.889 Verification LBA range: start 0x0 length 0x400
00:21:11.889 Nvme8n1 : 0.76 168.90 10.56 84.45 0.00 212667.58 22111.28 229774.91
00:21:11.889 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:11.889 Job: Nvme9n1 ended in about 0.75 seconds with error
00:21:11.889 Verification LBA range: start 0x0 length 0x400
00:21:11.889 Nvme9n1 : 0.75 169.81 10.61 84.91 0.00 206055.51 17780.20 242540.19
00:21:11.889 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:11.889 Job: Nvme10n1 ended in about 0.77 seconds with error
00:21:11.889 Verification LBA range: start 0x0 length 0x400
00:21:11.889 Nvme10n1 : 0.77 166.46 10.40 83.23 0.00 205866.15 29177.77 194214.51
00:21:11.889 ===================================================================================================================
00:21:11.889 Total : 1761.26 110.08 839.58 0.00 220337.91 17780.20 257129.07
00:21:11.889 [2024-07-26 14:02:39.091549] app.c:1053:spdk_app_stop: *WARNING*:
spdk_app_stop'd on non-zero 00:21:11.889 [2024-07-26 14:02:39.091582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:21:11.889 [2024-07-26 14:02:39.091622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234dc70 (9): Bad file descriptor 00:21:11.889 [2024-07-26 14:02:39.091634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2516ea0 (9): Bad file descriptor 00:21:11.889 [2024-07-26 14:02:39.091643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2519910 (9): Bad file descriptor 00:21:11.889 [2024-07-26 14:02:39.091652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250a860 (9): Bad file descriptor 00:21:11.889 [2024-07-26 14:02:39.091660] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:11.889 [2024-07-26 14:02:39.091666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:11.889 [2024-07-26 14:02:39.091674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:11.889 [2024-07-26 14:02:39.091694] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:11.889 [2024-07-26 14:02:39.091700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:11.889 [2024-07-26 14:02:39.091706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:11.889 [2024-07-26 14:02:39.091810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:11.889 [2024-07-26 14:02:39.091819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:11.889 [2024-07-26 14:02:39.092418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:11.889 [2024-07-26 14:02:39.092435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2370a50 with addr=10.0.0.2, port=4420 00:21:11.889 [2024-07-26 14:02:39.092444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2370a50 is same with the state(5) to be set 00:21:11.889 [2024-07-26 14:02:39.092847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:11.889 [2024-07-26 14:02:39.092858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24f1160 with addr=10.0.0.2, port=4420 00:21:11.889 [2024-07-26 14:02:39.092864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f1160 is same with the state(5) to be set 00:21:11.889 [2024-07-26 14:02:39.092871] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:11.889 [2024-07-26 14:02:39.092877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:11.889 [2024-07-26 14:02:39.092884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:11.889 [2024-07-26 14:02:39.092895] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:21:11.889 [2024-07-26 14:02:39.092901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:21:11.889 [2024-07-26 14:02:39.092907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:21:11.889 [2024-07-26 14:02:39.092918] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:11.889 [2024-07-26 14:02:39.092924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:11.889 [2024-07-26 14:02:39.092930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:11.889 [2024-07-26 14:02:39.092942] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:11.889 [2024-07-26 14:02:39.092948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:11.889 [2024-07-26 14:02:39.092955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:11.889 [2024-07-26 14:02:39.092990] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:11.889 [2024-07-26 14:02:39.093001] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:11.889 [2024-07-26 14:02:39.093010] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:11.889 [2024-07-26 14:02:39.093019] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:11.889 [2024-07-26 14:02:39.093519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:11.889 [2024-07-26 14:02:39.093529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:11.889 [2024-07-26 14:02:39.093534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:11.889 [2024-07-26 14:02:39.093543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:11.889 [2024-07-26 14:02:39.093561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2370a50 (9): Bad file descriptor 00:21:11.889 [2024-07-26 14:02:39.093571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f1160 (9): Bad file descriptor 00:21:11.889 [2024-07-26 14:02:39.093846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:21:11.889 [2024-07-26 14:02:39.093860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:11.889 [2024-07-26 14:02:39.093868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:11.889 [2024-07-26 14:02:39.093894] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:11.889 [2024-07-26 14:02:39.093900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:11.889 [2024-07-26 14:02:39.093907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:11.889 [2024-07-26 14:02:39.093916] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:11.889 [2024-07-26 14:02:39.093922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:11.889 [2024-07-26 14:02:39.093928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:11.889 [2024-07-26 14:02:39.093959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:11.889 [2024-07-26 14:02:39.093976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:11.889 [2024-07-26 14:02:39.093982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:11.889 [2024-07-26 14:02:39.094551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:11.890 [2024-07-26 14:02:39.094566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x250aa40 with addr=10.0.0.2, port=4420 00:21:11.890 [2024-07-26 14:02:39.094574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250aa40 is same with the state(5) to be set 00:21:11.890 [2024-07-26 14:02:39.095021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:11.890 [2024-07-26 14:02:39.095032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24f1800 with addr=10.0.0.2, port=4420 00:21:11.890 [2024-07-26 14:02:39.095039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f1800 is same with the state(5) to be set 00:21:11.890 [2024-07-26 14:02:39.095543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:11.890 [2024-07-26 14:02:39.095553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9c340 with addr=10.0.0.2, port=4420 00:21:11.890 [2024-07-26 14:02:39.095560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9c340 is same with the state(5) to be set 00:21:11.890 [2024-07-26 14:02:39.095977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:11.890 [2024-07-26 14:02:39.095990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2379ee0 with addr=10.0.0.2, port=4420 00:21:11.890 [2024-07-26 14:02:39.095997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2379ee0 is same with the state(5) to be set 00:21:11.890 [2024-07-26 14:02:39.096007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250aa40 (9): Bad file descriptor 00:21:11.890 [2024-07-26 14:02:39.096016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f1800 (9): Bad file descriptor 00:21:11.890 [2024-07-26 14:02:39.096024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e9c340 (9): Bad file descriptor 00:21:11.890 [2024-07-26 14:02:39.096076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2379ee0 (9): Bad file descriptor 00:21:11.890 [2024-07-26 14:02:39.096086] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:21:11.890 [2024-07-26 14:02:39.096092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:21:11.890 [2024-07-26 14:02:39.096098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:21:11.890 [2024-07-26 14:02:39.096107] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:21:11.890 [2024-07-26 14:02:39.096113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:21:11.890 [2024-07-26 14:02:39.096119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:21:11.890 [2024-07-26 14:02:39.096127] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:11.890 [2024-07-26 14:02:39.096133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:11.890 [2024-07-26 14:02:39.096139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:11.890 [2024-07-26 14:02:39.096179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:11.890 [2024-07-26 14:02:39.096187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:11.890 [2024-07-26 14:02:39.096192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:11.890 [2024-07-26 14:02:39.096198] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:11.890 [2024-07-26 14:02:39.096204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:11.890 [2024-07-26 14:02:39.096210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:11.890 [2024-07-26 14:02:39.096233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:12.150 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:12.150 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3022772 00:21:13.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3022772) - No such process 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:21:13.093 rmmod nvme_tcp 00:21:13.093 rmmod nvme_fabrics 00:21:13.093 rmmod nvme_keyring 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.093 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.642 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:15.642 00:21:15.642 real 0m8.023s 00:21:15.642 user 0m20.256s 00:21:15.642 sys 0m1.336s 00:21:15.642 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:15.642 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:15.642 ************************************ 00:21:15.642 END TEST nvmf_shutdown_tc3 00:21:15.642 ************************************ 00:21:15.642 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:15.642 00:21:15.642 real 0m31.573s 00:21:15.642 user 1m19.578s 00:21:15.642 sys 0m8.537s 00:21:15.642 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:15.642 14:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:15.642 ************************************ 00:21:15.642 END TEST nvmf_shutdown 00:21:15.642 ************************************ 00:21:15.642 14:02:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:21:15.642 00:21:15.642 real 10m37.793s 00:21:15.642 user 23m44.463s 00:21:15.642 sys 3m0.732s 00:21:15.642 14:02:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:15.642 14:02:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:15.642 ************************************ 00:21:15.642 END TEST nvmf_target_extra 00:21:15.642 ************************************ 00:21:15.642 14:02:42 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:15.642 14:02:42 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:15.642 14:02:42 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:15.642 14:02:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:15.642 ************************************ 00:21:15.642 START TEST nvmf_host 00:21:15.642 ************************************ 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:15.642 * Looking for test storage... 00:21:15.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
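The common.sh trace above shows how the host identity used throughout these tests is built: nvme gen-hostnqn produces the host NQN, the trailing UUID is kept as the host ID, and both are later handed to nvme connect via the --hostnqn/--hostid options collected in NVME_HOST. A minimal stand-alone sketch of the same idea follows; it is not the project's helper itself, and the parameter expansion is just one way to peel the UUID off an NQN of the uuid: form seen in this log:

    # Generate a host NQN with nvme-cli and keep its UUID part as the host id.
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # strip everything through the last ':'
    echo "hostnqn: $NVME_HOSTNQN"
    echo "hostid:  $NVME_HOSTID"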
00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.642 ************************************ 00:21:15.642 START TEST nvmf_multicontroller 00:21:15.642 ************************************ 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh 
--transport=tcp 00:21:15.642 * Looking for test storage... 00:21:15.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.642 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:15.643 14:02:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:15.643 14:02:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
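The gather_supported_nvmf_pci_devs trace that begins here builds arrays of Intel E810/X722 and Mellanox device IDs from the PCI bus cache and then, for every matching function, resolves the kernel network interface that sits under it in sysfs (the "Found net devices under 0000:86:00.x" lines a little further down). A hand-rolled equivalent for a single NIC looks roughly like this; the PCI address is the one from this run and lspci is assumed to be installed:

    # Map one PCI function to its netdev the same way the traced helper does.
    pci=0000:86:00.0
    lspci -s "$pci" -nn                  # shows the 8086:159b (E810) vendor:device pair
    ls "/sys/bus/pci/devices/$pci/net/"  # prints the netdev name, cvl_0_0 in this run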
00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:22.232 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:22.232 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:22.232 Found net devices under 0000:86:00.0: cvl_0_0 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:22.232 Found net devices under 0000:86:00.1: cvl_0_1 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.232 14:02:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:22.232 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:22.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:21:22.232 00:21:22.232 --- 10.0.0.2 ping statistics --- 00:21:22.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.233 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:22.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:22.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.413 ms 00:21:22.233 00:21:22.233 --- 10.0.0.1 ping statistics --- 00:21:22.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.233 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3026960 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3026960 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3026960 ']' 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:22.233 14:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.233 [2024-07-26 14:02:48.811804] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:21:22.233 [2024-07-26 14:02:48.811851] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.233 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.233 [2024-07-26 14:02:48.869054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:22.233 [2024-07-26 14:02:48.954279] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.233 [2024-07-26 14:02:48.954313] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.233 [2024-07-26 14:02:48.954320] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.233 [2024-07-26 14:02:48.954326] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.233 [2024-07-26 14:02:48.954331] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:22.233 [2024-07-26 14:02:48.954389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.233 [2024-07-26 14:02:48.954413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:22.233 [2024-07-26 14:02:48.954414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.233 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:22.233 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:21:22.233 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:22.233 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:22.233 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.233 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.233 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:22.233 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.233 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.233 [2024-07-26 14:02:49.655108] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.495 Malloc0 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.495 
14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.495 [2024-07-26 14:02:49.723842] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.495 [2024-07-26 14:02:49.731766] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.495 Malloc1 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.495 14:02:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3027099 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3027099 /var/tmp/bdevperf.sock 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3027099 ']' 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
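The trace above (host/multicontroller.sh@25-@47) stands up the target side of the multicontroller test: a TCP transport, two 64 MB malloc bdevs with 512-byte blocks, subsystems cnode1 and cnode2 each exposing one namespace on 10.0.0.2 listeners 4420 and 4421, and finally a bdevperf instance started in RPC-wait mode (-z) on /var/tmp/bdevperf.sock. A minimal standalone sketch of the same sequence, assuming a running nvmf_tgt reachable on the default RPC socket and SPDK's scripts/rpc.py (the rpc_cmd calls in the trace wrap these same RPCs; 10.0.0.2 is the address assigned inside the cvl_0_0_ns_spdk namespace earlier in the log):

# target-side setup (sketch; paths relative to the SPDK repo root, flags as in the trace)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
# initiator side: bdevperf waits for an RPC (-z) on its own socket before running the queued
# 128-deep, 4096-byte write job for 1 second
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &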
00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:22.495 14:02:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.437 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:23.437 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:21:23.437 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:23.437 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.437 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.437 NVMe0n1 00:21:23.437 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.437 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:23.437 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:23.437 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.437 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.437 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.437 1 00:21:23.437 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:23.437 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:23.437 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:23.437 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:23.437 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.437 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:23.437 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.437 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:23.437 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.437 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.437 request: 00:21:23.437 { 00:21:23.437 "name": "NVMe0", 00:21:23.437 "trtype": "tcp", 00:21:23.437 "traddr": "10.0.0.2", 00:21:23.437 "adrfam": "ipv4", 00:21:23.437 
"trsvcid": "4420", 00:21:23.437 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.437 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:23.437 "hostaddr": "10.0.0.2", 00:21:23.437 "hostsvcid": "60000", 00:21:23.437 "prchk_reftag": false, 00:21:23.437 "prchk_guard": false, 00:21:23.437 "hdgst": false, 00:21:23.437 "ddgst": false, 00:21:23.437 "method": "bdev_nvme_attach_controller", 00:21:23.437 "req_id": 1 00:21:23.437 } 00:21:23.437 Got JSON-RPC error response 00:21:23.437 response: 00:21:23.437 { 00:21:23.437 "code": -114, 00:21:23.438 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:23.438 } 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.438 request: 00:21:23.438 { 00:21:23.438 "name": "NVMe0", 00:21:23.438 "trtype": "tcp", 00:21:23.438 "traddr": "10.0.0.2", 00:21:23.438 "adrfam": "ipv4", 00:21:23.438 "trsvcid": "4420", 00:21:23.438 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:23.438 "hostaddr": "10.0.0.2", 00:21:23.438 "hostsvcid": "60000", 00:21:23.438 "prchk_reftag": false, 00:21:23.438 "prchk_guard": false, 00:21:23.438 "hdgst": false, 00:21:23.438 "ddgst": false, 00:21:23.438 "method": "bdev_nvme_attach_controller", 00:21:23.438 "req_id": 1 00:21:23.438 } 00:21:23.438 Got JSON-RPC error response 00:21:23.438 response: 00:21:23.438 { 00:21:23.438 "code": -114, 00:21:23.438 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:21:23.438 } 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.438 request: 00:21:23.438 { 00:21:23.438 "name": "NVMe0", 00:21:23.438 "trtype": "tcp", 00:21:23.438 "traddr": "10.0.0.2", 00:21:23.438 "adrfam": "ipv4", 00:21:23.438 "trsvcid": "4420", 00:21:23.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.438 "hostaddr": "10.0.0.2", 00:21:23.438 "hostsvcid": "60000", 00:21:23.438 "prchk_reftag": false, 00:21:23.438 "prchk_guard": false, 00:21:23.438 "hdgst": false, 00:21:23.438 "ddgst": false, 00:21:23.438 "multipath": "disable", 00:21:23.438 "method": "bdev_nvme_attach_controller", 00:21:23.438 "req_id": 1 00:21:23.438 } 00:21:23.438 Got JSON-RPC error response 00:21:23.438 response: 00:21:23.438 { 00:21:23.438 "code": -114, 00:21:23.438 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:23.438 } 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.438 request: 00:21:23.438 { 00:21:23.438 "name": "NVMe0", 00:21:23.438 "trtype": "tcp", 00:21:23.438 "traddr": "10.0.0.2", 00:21:23.438 "adrfam": "ipv4", 00:21:23.438 "trsvcid": "4420", 00:21:23.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.438 "hostaddr": "10.0.0.2", 00:21:23.438 "hostsvcid": "60000", 00:21:23.438 "prchk_reftag": false, 00:21:23.438 "prchk_guard": false, 00:21:23.438 "hdgst": false, 00:21:23.438 "ddgst": false, 00:21:23.438 "multipath": "failover", 00:21:23.438 "method": "bdev_nvme_attach_controller", 00:21:23.438 "req_id": 1 00:21:23.438 } 00:21:23.438 Got JSON-RPC error response 00:21:23.438 response: 00:21:23.438 { 00:21:23.438 "code": -114, 00:21:23.438 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:23.438 } 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.438 14:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.734 00:21:23.734 14:02:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.734 14:02:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:23.734 14:02:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.734 14:02:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.734 14:02:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.734 14:02:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:23.734 14:02:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.734 14:02:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.734 00:21:23.734 14:02:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.734 14:02:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:23.734 14:02:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:23.734 14:02:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.734 14:02:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.998 14:02:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.998 14:02:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:23.998 14:02:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:24.939 0 00:21:24.939 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:24.939 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.939 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:24.939 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.939 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3027099 00:21:24.939 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3027099 ']' 00:21:24.939 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3027099 00:21:24.939 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:21:24.939 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:24.939 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3027099 00:21:24.939 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 
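With both listeners up, host/multicontroller.sh@50-@95 exercises controller-name collision handling through the bdevperf RPC socket: the first attach creates controller NVMe0 (bdev NVMe0n1); re-attaching the same name with a different hostnqn, against cnode2, with -x disable, or with -x failover is rejected with JSON-RPC error -114, as the request/response dumps above show; adding a second path on port 4421 without hostaddr/hostsvcid is accepted and then detached; finally a second controller NVMe1 is attached on 4421 and the queued write job is kicked off with bdevperf.py. A condensed sketch of that sequence, with flags copied from the trace (rpc_cmd -s /var/tmp/bdevperf.sock corresponds to rpc.py -s /var/tmp/bdevperf.sock):

RPC="./scripts/rpc.py -s /var/tmp/bdevperf.sock"

# first attach creates NVMe0 / bdev NVMe0n1
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

# each of these is expected to fail with -114 (controller name / network path conflict)
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 || true
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 || true
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable || true
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover || true

# a second path on port 4421 (no hostaddr/hostsvcid) is accepted, then removed again
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# attach a second controller, verify two controllers exist, and start the queued bdevperf job
$RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
$RPC bdev_nvme_get_controllers | grep -c NVMe    # expect 2
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests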
00:21:24.939 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:24.939 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3027099' 00:21:24.939 killing process with pid 3027099 00:21:24.939 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3027099 00:21:24.939 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3027099 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:21:25.199 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:25.199 [2024-07-26 14:02:49.834286] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:21:25.199 [2024-07-26 14:02:49.834334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027099 ] 00:21:25.199 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.199 [2024-07-26 14:02:49.888539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.199 [2024-07-26 14:02:49.963637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.199 [2024-07-26 14:02:51.141980] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 6ca5c8c2-3246-46bf-8b06-9d1b4e940b9b already exists 00:21:25.199 [2024-07-26 14:02:51.142009] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:6ca5c8c2-3246-46bf-8b06-9d1b4e940b9b alias for bdev NVMe1n1 00:21:25.199 [2024-07-26 14:02:51.142017] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:25.199 Running I/O for 1 seconds... 00:21:25.199 00:21:25.199 Latency(us) 00:21:25.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.199 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:25.199 NVMe0n1 : 1.01 23284.31 90.95 0.00 0.00 5479.35 2179.78 20173.69 00:21:25.199 =================================================================================================================== 00:21:25.199 Total : 23284.31 90.95 0.00 0.00 5479.35 2179.78 20173.69 00:21:25.199 Received shutdown signal, test time was about 1.000000 seconds 00:21:25.199 00:21:25.199 Latency(us) 00:21:25.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.199 =================================================================================================================== 00:21:25.199 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.199 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:25.199 rmmod nvme_tcp 00:21:25.199 rmmod nvme_fabrics 00:21:25.199 rmmod nvme_keyring 00:21:25.199 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:25.459 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:21:25.459 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:21:25.459 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3026960 ']' 00:21:25.459 14:02:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3026960 00:21:25.459 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3026960 ']' 00:21:25.459 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3026960 00:21:25.459 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:21:25.459 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:25.459 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3026960 00:21:25.459 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:25.459 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:25.459 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3026960' 00:21:25.459 killing process with pid 3026960 00:21:25.459 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3026960 00:21:25.459 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3026960 00:21:25.719 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:25.719 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:25.719 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:25.719 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.719 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:25.719 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.719 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.719 14:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.642 14:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:27.642 00:21:27.642 real 0m12.108s 00:21:27.642 user 0m16.365s 00:21:27.642 sys 0m5.118s 00:21:27.642 14:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:27.642 14:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.642 ************************************ 00:21:27.642 END TEST nvmf_multicontroller 00:21:27.642 ************************************ 00:21:27.642 14:02:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:27.642 14:02:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:27.642 14:02:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:27.642 14:02:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.642 ************************************ 00:21:27.642 START TEST nvmf_aer 00:21:27.642 ************************************ 00:21:27.642 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:27.902 * Looking for test storage... 00:21:27.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:27.903 14:02:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:33.190 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:33.190 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:33.190 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:33.191 Found net devices under 0000:86:00.0: cvl_0_0 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.191 14:03:00 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:33.191 Found net devices under 0000:86:00.1: cvl_0_1 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:33.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:21:33.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:21:33.191 00:21:33.191 --- 10.0.0.2 ping statistics --- 00:21:33.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.191 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:33.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:33.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:21:33.191 00:21:33.191 --- 10.0.0.1 ping statistics --- 00:21:33.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.191 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3031129 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3031129 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 3031129 ']' 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:33.191 14:03:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:33.451 [2024-07-26 14:03:00.637972] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
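For reference, the environment that nvmftestinit has just configured above can be reproduced by hand with roughly the commands below. This is a condensed sketch of the traced nvmf_tcp_init steps, not the harness itself: the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addressing and the nvmf_tgt invocation are taken directly from this run, while running as root from the SPDK checkout (hence the relative binary path) is an assumption.

  # One E810 port (cvl_0_0) becomes the target NIC inside a private network
  # namespace; its peer port (cvl_0_1) stays in the default namespace and acts
  # as the initiator side.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Let NVMe/TCP traffic reach the default port 4420 and verify both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # Load the kernel NVMe/TCP initiator and start the SPDK target inside the
  # namespace; the harness then waits for /var/tmp/spdk.sock (waitforlisten)
  # before issuing any RPCs.
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &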
00:21:33.451 [2024-07-26 14:03:00.638023] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.451 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.451 [2024-07-26 14:03:00.696426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:33.451 [2024-07-26 14:03:00.782996] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.451 [2024-07-26 14:03:00.783031] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.452 [2024-07-26 14:03:00.783038] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.452 [2024-07-26 14:03:00.783048] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.452 [2024-07-26 14:03:00.783053] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:33.452 [2024-07-26 14:03:00.783093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.452 [2024-07-26 14:03:00.783176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.452 [2024-07-26 14:03:00.783284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:33.452 [2024-07-26 14:03:00.783286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.024 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:34.024 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:21:34.024 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:34.024 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:34.024 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:34.284 [2024-07-26 14:03:01.488321] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:34.284 Malloc0 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:34.284 14:03:01 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:34.284 [2024-07-26 14:03:01.532101] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:34.284 [ 00:21:34.284 { 00:21:34.284 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:34.284 "subtype": "Discovery", 00:21:34.284 "listen_addresses": [], 00:21:34.284 "allow_any_host": true, 00:21:34.284 "hosts": [] 00:21:34.284 }, 00:21:34.284 { 00:21:34.284 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.284 "subtype": "NVMe", 00:21:34.284 "listen_addresses": [ 00:21:34.284 { 00:21:34.284 "trtype": "TCP", 00:21:34.284 "adrfam": "IPv4", 00:21:34.284 "traddr": "10.0.0.2", 00:21:34.284 "trsvcid": "4420" 00:21:34.284 } 00:21:34.284 ], 00:21:34.284 "allow_any_host": true, 00:21:34.284 "hosts": [], 00:21:34.284 "serial_number": "SPDK00000000000001", 00:21:34.284 "model_number": "SPDK bdev Controller", 00:21:34.284 "max_namespaces": 2, 00:21:34.284 "min_cntlid": 1, 00:21:34.284 "max_cntlid": 65519, 00:21:34.284 "namespaces": [ 00:21:34.284 { 00:21:34.284 "nsid": 1, 00:21:34.284 "bdev_name": "Malloc0", 00:21:34.284 "name": "Malloc0", 00:21:34.284 "nguid": "B0C52FC802B34A3A96FFE8CA0ED0EE95", 00:21:34.284 "uuid": "b0c52fc8-02b3-4a3a-96ff-e8ca0ed0ee95" 00:21:34.284 } 00:21:34.284 ] 00:21:34.284 } 00:21:34.284 ] 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3031432 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:21:34.284 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:34.285 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.285 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:34.285 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:21:34.285 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:21:34.285 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:34.546 Malloc1 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:34.546 [ 00:21:34.546 { 00:21:34.546 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:34.546 "subtype": "Discovery", 00:21:34.546 "listen_addresses": [], 00:21:34.546 "allow_any_host": true, 00:21:34.546 "hosts": [] 00:21:34.546 }, 00:21:34.546 { 00:21:34.546 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.546 "subtype": "NVMe", 00:21:34.546 "listen_addresses": [ 00:21:34.546 { 00:21:34.546 "trtype": "TCP", 00:21:34.546 "adrfam": "IPv4", 00:21:34.546 "traddr": "10.0.0.2", 00:21:34.546 "trsvcid": "4420" 00:21:34.546 } 00:21:34.546 ], 00:21:34.546 "allow_any_host": true, 00:21:34.546 "hosts": [], 00:21:34.546 "serial_number": "SPDK00000000000001", 00:21:34.546 "model_number": "SPDK bdev Controller", 00:21:34.546 "max_namespaces": 2, 00:21:34.546 "min_cntlid": 1, 00:21:34.546 
"max_cntlid": 65519, 00:21:34.546 "namespaces": [ 00:21:34.546 { 00:21:34.546 "nsid": 1, 00:21:34.546 "bdev_name": "Malloc0", 00:21:34.546 "name": "Malloc0", 00:21:34.546 "nguid": "B0C52FC802B34A3A96FFE8CA0ED0EE95", 00:21:34.546 "uuid": "b0c52fc8-02b3-4a3a-96ff-e8ca0ed0ee95" 00:21:34.546 }, 00:21:34.546 { 00:21:34.546 "nsid": 2, 00:21:34.546 "bdev_name": "Malloc1", 00:21:34.546 "name": "Malloc1", 00:21:34.546 "nguid": "6E98769883D94A92AE350AF7AE2A8F88", 00:21:34.546 "uuid": "6e987698-83d9-4a92-ae35-0af7ae2a8f88" 00:21:34.546 } 00:21:34.546 ] 00:21:34.546 } 00:21:34.546 ] 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3031432 00:21:34.546 Asynchronous Event Request test 00:21:34.546 Attaching to 10.0.0.2 00:21:34.546 Attached to 10.0.0.2 00:21:34.546 Registering asynchronous event callbacks... 00:21:34.546 Starting namespace attribute notice tests for all controllers... 00:21:34.546 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:34.546 aer_cb - Changed Namespace 00:21:34.546 Cleaning up... 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.546 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:34.807 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.807 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:34.807 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.807 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:34.807 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.807 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:34.807 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:34.807 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:34.807 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:34.807 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:34.807 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:34.807 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:34.807 14:03:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:34.807 rmmod nvme_tcp 00:21:34.807 rmmod nvme_fabrics 00:21:34.807 rmmod nvme_keyring 00:21:34.807 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:34.807 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:34.807 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:34.807 14:03:02 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3031129 ']' 00:21:34.807 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3031129 00:21:34.807 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 3031129 ']' 00:21:34.807 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 3031129 00:21:34.807 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:21:34.807 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:34.807 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3031129 00:21:34.807 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:34.807 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:34.807 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3031129' 00:21:34.807 killing process with pid 3031129 00:21:34.807 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 3031129 00:21:34.807 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 3031129 00:21:35.067 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:35.067 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:35.067 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:35.067 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:35.068 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:35.068 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.068 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.068 14:03:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.978 14:03:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:36.978 00:21:36.978 real 0m9.300s 00:21:36.978 user 0m7.487s 00:21:36.978 sys 0m4.540s 00:21:36.978 14:03:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:36.978 14:03:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:36.978 ************************************ 00:21:36.978 END TEST nvmf_aer 00:21:36.978 ************************************ 00:21:36.978 14:03:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:36.978 14:03:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:36.978 14:03:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:36.979 14:03:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.239 ************************************ 00:21:37.239 START TEST nvmf_async_init 00:21:37.239 ************************************ 00:21:37.239 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:37.239 * Looking for test storage... 
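Before moving on to nvmf_async_init: stripped of the xtrace noise, the nvmf_aer run above amounts to the sequence below. rpc_cmd and waitforfile are the harness helpers visible in the trace; the paths are written relative to the SPDK checkout, which is an assumption of this sketch, and the addresses and NQNs match this run.

  # Target-side configuration (host/aer.sh as traced above).
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 --name Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Start the AER listener; it touches the file once its callbacks are registered.
  ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
  aerpid=$!
  waitforfile /tmp/aer_touch_file
  # Adding a second namespace changes the subsystem's namespace list, which is
  # what produced the "aer_cb - Changed Namespace" event seen in the log.
  rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  wait $aerpid
  # Teardown mirrors the setup.
  rpc_cmd bdev_malloc_delete Malloc0
  rpc_cmd bdev_malloc_delete Malloc1
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1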
00:21:37.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:37.239 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:37.239 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:37.239 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.239 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.239 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.239 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.239 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.239 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.239 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.239 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.239 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.239 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.239 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:37.239 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:37.239 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.239 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:37.240 14:03:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d5a6705d13264d839fa11a1fab1b1d4c 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:37.240 14:03:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:42.523 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:42.523 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
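The device-discovery loop being traced here (gather_supported_nvmf_pci_devs) boils down to mapping each supported PCI function to its kernel net device through sysfs. A minimal stand-alone sketch using the two E810 functions found in this run; treating the traced "[[ up == up ]]" as an operstate check is an assumption of the sketch, not something the log states.

  for pci in 0000:86:00.0 0000:86:00.1; do
      # Net devices bound to a PCI function are listed under its sysfs node.
      for path in /sys/bus/pci/devices/"$pci"/net/*; do
          net_dev=${path##*/}
          # Assumption: the "up == up" test in the trace reads the link operstate.
          [[ $(cat "$path/operstate") == up ]] || continue
          echo "Found net devices under $pci: $net_dev"
      done
  done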
00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:42.523 Found net devices under 0000:86:00.0: cvl_0_0 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:42.523 Found net devices under 0000:86:00.1: cvl_0_1 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:42.523 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:42.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:21:42.524 00:21:42.524 --- 10.0.0.2 ping statistics --- 00:21:42.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.524 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:42.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.429 ms 00:21:42.524 00:21:42.524 --- 10.0.0.1 ping statistics --- 00:21:42.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.524 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3035212 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3035212 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 3035212 ']' 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:42.524 14:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.524 [2024-07-26 14:03:09.950197] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
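With the target up and waitforlisten satisfied, host/async_init.sh (traced from this point on) exercises controller attach and reset against a null bdev. Condensed into plain RPCs, the steps traced below come down to roughly the following; rpc_cmd is the same harness helper as before, and the NQN, NGUID and addressing are the ones from this run.

  # Export a 1024-block, 512-byte null bdev with a fixed NGUID.
  rpc_cmd nvmf_create_transport -t tcp -o
  rpc_cmd bdev_null_create null0 1024 512
  rpc_cmd bdev_wait_for_examine
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d5a6705d13264d839fa11a1fab1b1d4c
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Attach a bdev_nvme controller to that subsystem over NVMe/TCP, then inspect,
  # reset and finally detach the resulting nvme0n1 bdev.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  rpc_cmd bdev_get_bdevs -b nvme0n1
  rpc_cmd bdev_nvme_reset_controller nvme0
  rpc_cmd bdev_get_bdevs -b nvme0n1
  rpc_cmd bdev_nvme_detach_controller nvme0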
00:21:42.524 [2024-07-26 14:03:09.950243] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.784 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.784 [2024-07-26 14:03:10.006600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.784 [2024-07-26 14:03:10.100376] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.784 [2024-07-26 14:03:10.100413] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.784 [2024-07-26 14:03:10.100420] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.784 [2024-07-26 14:03:10.100427] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.784 [2024-07-26 14:03:10.100432] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:42.784 [2024-07-26 14:03:10.100453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.354 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:43.354 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:21:43.354 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:43.354 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:43.354 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.354 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.354 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:43.354 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.354 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.354 [2024-07-26 14:03:10.788693] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.614 null0 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:43.614 14:03:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d5a6705d13264d839fa11a1fab1b1d4c 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.614 [2024-07-26 14:03:10.828873] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.614 14:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.875 nvme0n1 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.875 [ 00:21:43.875 { 00:21:43.875 "name": "nvme0n1", 00:21:43.875 "aliases": [ 00:21:43.875 "d5a6705d-1326-4d83-9fa1-1a1fab1b1d4c" 00:21:43.875 ], 00:21:43.875 "product_name": "NVMe disk", 00:21:43.875 "block_size": 512, 00:21:43.875 "num_blocks": 2097152, 00:21:43.875 "uuid": "d5a6705d-1326-4d83-9fa1-1a1fab1b1d4c", 00:21:43.875 "assigned_rate_limits": { 00:21:43.875 "rw_ios_per_sec": 0, 00:21:43.875 "rw_mbytes_per_sec": 0, 00:21:43.875 "r_mbytes_per_sec": 0, 00:21:43.875 "w_mbytes_per_sec": 0 00:21:43.875 }, 00:21:43.875 "claimed": false, 00:21:43.875 "zoned": false, 00:21:43.875 "supported_io_types": { 00:21:43.875 "read": true, 00:21:43.875 "write": true, 00:21:43.875 "unmap": false, 00:21:43.875 "flush": true, 00:21:43.875 "reset": true, 00:21:43.875 "nvme_admin": true, 00:21:43.875 "nvme_io": true, 00:21:43.875 "nvme_io_md": false, 00:21:43.875 "write_zeroes": true, 00:21:43.875 "zcopy": false, 00:21:43.875 "get_zone_info": false, 00:21:43.875 "zone_management": false, 00:21:43.875 "zone_append": false, 00:21:43.875 "compare": true, 00:21:43.875 "compare_and_write": true, 00:21:43.875 "abort": true, 00:21:43.875 "seek_hole": false, 00:21:43.875 "seek_data": false, 00:21:43.875 "copy": true, 00:21:43.875 "nvme_iov_md": 
false 00:21:43.875 }, 00:21:43.875 "memory_domains": [ 00:21:43.875 { 00:21:43.875 "dma_device_id": "system", 00:21:43.875 "dma_device_type": 1 00:21:43.875 } 00:21:43.875 ], 00:21:43.875 "driver_specific": { 00:21:43.875 "nvme": [ 00:21:43.875 { 00:21:43.875 "trid": { 00:21:43.875 "trtype": "TCP", 00:21:43.875 "adrfam": "IPv4", 00:21:43.875 "traddr": "10.0.0.2", 00:21:43.875 "trsvcid": "4420", 00:21:43.875 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:43.875 }, 00:21:43.875 "ctrlr_data": { 00:21:43.875 "cntlid": 1, 00:21:43.875 "vendor_id": "0x8086", 00:21:43.875 "model_number": "SPDK bdev Controller", 00:21:43.875 "serial_number": "00000000000000000000", 00:21:43.875 "firmware_revision": "24.09", 00:21:43.875 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:43.875 "oacs": { 00:21:43.875 "security": 0, 00:21:43.875 "format": 0, 00:21:43.875 "firmware": 0, 00:21:43.875 "ns_manage": 0 00:21:43.875 }, 00:21:43.875 "multi_ctrlr": true, 00:21:43.875 "ana_reporting": false 00:21:43.875 }, 00:21:43.875 "vs": { 00:21:43.875 "nvme_version": "1.3" 00:21:43.875 }, 00:21:43.875 "ns_data": { 00:21:43.875 "id": 1, 00:21:43.875 "can_share": true 00:21:43.875 } 00:21:43.875 } 00:21:43.875 ], 00:21:43.875 "mp_policy": "active_passive" 00:21:43.875 } 00:21:43.875 } 00:21:43.875 ] 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.875 [2024-07-26 14:03:11.085446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:43.875 [2024-07-26 14:03:11.085499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2b390 (9): Bad file descriptor 00:21:43.875 [2024-07-26 14:03:11.217138] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.875 [ 00:21:43.875 { 00:21:43.875 "name": "nvme0n1", 00:21:43.875 "aliases": [ 00:21:43.875 "d5a6705d-1326-4d83-9fa1-1a1fab1b1d4c" 00:21:43.875 ], 00:21:43.875 "product_name": "NVMe disk", 00:21:43.875 "block_size": 512, 00:21:43.875 "num_blocks": 2097152, 00:21:43.875 "uuid": "d5a6705d-1326-4d83-9fa1-1a1fab1b1d4c", 00:21:43.875 "assigned_rate_limits": { 00:21:43.875 "rw_ios_per_sec": 0, 00:21:43.875 "rw_mbytes_per_sec": 0, 00:21:43.875 "r_mbytes_per_sec": 0, 00:21:43.875 "w_mbytes_per_sec": 0 00:21:43.875 }, 00:21:43.875 "claimed": false, 00:21:43.875 "zoned": false, 00:21:43.875 "supported_io_types": { 00:21:43.875 "read": true, 00:21:43.875 "write": true, 00:21:43.875 "unmap": false, 00:21:43.875 "flush": true, 00:21:43.875 "reset": true, 00:21:43.875 "nvme_admin": true, 00:21:43.875 "nvme_io": true, 00:21:43.875 "nvme_io_md": false, 00:21:43.875 "write_zeroes": true, 00:21:43.875 "zcopy": false, 00:21:43.875 "get_zone_info": false, 00:21:43.875 "zone_management": false, 00:21:43.875 "zone_append": false, 00:21:43.875 "compare": true, 00:21:43.875 "compare_and_write": true, 00:21:43.875 "abort": true, 00:21:43.875 "seek_hole": false, 00:21:43.875 "seek_data": false, 00:21:43.875 "copy": true, 00:21:43.875 "nvme_iov_md": false 00:21:43.875 }, 00:21:43.875 "memory_domains": [ 00:21:43.875 { 00:21:43.875 "dma_device_id": "system", 00:21:43.875 "dma_device_type": 1 00:21:43.875 } 00:21:43.875 ], 00:21:43.875 "driver_specific": { 00:21:43.875 "nvme": [ 00:21:43.875 { 00:21:43.875 "trid": { 00:21:43.875 "trtype": "TCP", 00:21:43.875 "adrfam": "IPv4", 00:21:43.875 "traddr": "10.0.0.2", 00:21:43.875 "trsvcid": "4420", 00:21:43.875 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:43.875 }, 00:21:43.875 "ctrlr_data": { 00:21:43.875 "cntlid": 2, 00:21:43.875 "vendor_id": "0x8086", 00:21:43.875 "model_number": "SPDK bdev Controller", 00:21:43.875 "serial_number": "00000000000000000000", 00:21:43.875 "firmware_revision": "24.09", 00:21:43.875 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:43.875 "oacs": { 00:21:43.875 "security": 0, 00:21:43.875 "format": 0, 00:21:43.875 "firmware": 0, 00:21:43.875 "ns_manage": 0 00:21:43.875 }, 00:21:43.875 "multi_ctrlr": true, 00:21:43.875 "ana_reporting": false 00:21:43.875 }, 00:21:43.875 "vs": { 00:21:43.875 "nvme_version": "1.3" 00:21:43.875 }, 00:21:43.875 "ns_data": { 00:21:43.875 "id": 1, 00:21:43.875 "can_share": true 00:21:43.875 } 00:21:43.875 } 00:21:43.875 ], 00:21:43.875 "mp_policy": "active_passive" 00:21:43.875 } 00:21:43.875 } 00:21:43.875 ] 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.875 14:03:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.96FPbPENcW 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.96FPbPENcW 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.875 [2024-07-26 14:03:11.278024] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:43.875 [2024-07-26 14:03:11.278119] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.96FPbPENcW 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.875 [2024-07-26 14:03:11.286039] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.96FPbPENcW 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.875 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.875 [2024-07-26 14:03:11.294079] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:43.875 [2024-07-26 14:03:11.294112] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:44.135 nvme0n1 00:21:44.135 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.135 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:44.135 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:44.135 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:44.135 [ 00:21:44.135 { 00:21:44.135 "name": "nvme0n1", 00:21:44.135 "aliases": [ 00:21:44.135 "d5a6705d-1326-4d83-9fa1-1a1fab1b1d4c" 00:21:44.135 ], 00:21:44.135 "product_name": "NVMe disk", 00:21:44.135 "block_size": 512, 00:21:44.135 "num_blocks": 2097152, 00:21:44.135 "uuid": "d5a6705d-1326-4d83-9fa1-1a1fab1b1d4c", 00:21:44.135 "assigned_rate_limits": { 00:21:44.135 "rw_ios_per_sec": 0, 00:21:44.135 "rw_mbytes_per_sec": 0, 00:21:44.135 "r_mbytes_per_sec": 0, 00:21:44.135 "w_mbytes_per_sec": 0 00:21:44.135 }, 00:21:44.135 "claimed": false, 00:21:44.135 "zoned": false, 00:21:44.135 "supported_io_types": { 00:21:44.135 "read": true, 00:21:44.135 "write": true, 00:21:44.135 "unmap": false, 00:21:44.135 "flush": true, 00:21:44.135 "reset": true, 00:21:44.135 "nvme_admin": true, 00:21:44.135 "nvme_io": true, 00:21:44.135 "nvme_io_md": false, 00:21:44.135 "write_zeroes": true, 00:21:44.135 "zcopy": false, 00:21:44.135 "get_zone_info": false, 00:21:44.135 "zone_management": false, 00:21:44.135 "zone_append": false, 00:21:44.135 "compare": true, 00:21:44.135 "compare_and_write": true, 00:21:44.135 "abort": true, 00:21:44.135 "seek_hole": false, 00:21:44.135 "seek_data": false, 00:21:44.135 "copy": true, 00:21:44.135 "nvme_iov_md": false 00:21:44.135 }, 00:21:44.135 "memory_domains": [ 00:21:44.135 { 00:21:44.135 "dma_device_id": "system", 00:21:44.135 "dma_device_type": 1 00:21:44.135 } 00:21:44.135 ], 00:21:44.135 "driver_specific": { 00:21:44.135 "nvme": [ 00:21:44.135 { 00:21:44.135 "trid": { 00:21:44.135 "trtype": "TCP", 00:21:44.135 "adrfam": "IPv4", 00:21:44.135 "traddr": "10.0.0.2", 00:21:44.135 "trsvcid": "4421", 00:21:44.135 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:44.135 }, 00:21:44.135 "ctrlr_data": { 00:21:44.135 "cntlid": 3, 00:21:44.135 "vendor_id": "0x8086", 00:21:44.135 "model_number": "SPDK bdev Controller", 00:21:44.135 "serial_number": "00000000000000000000", 00:21:44.135 "firmware_revision": "24.09", 00:21:44.135 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:44.135 "oacs": { 00:21:44.135 "security": 0, 00:21:44.135 "format": 0, 00:21:44.135 "firmware": 0, 00:21:44.135 "ns_manage": 0 00:21:44.135 }, 00:21:44.135 "multi_ctrlr": true, 00:21:44.135 "ana_reporting": false 00:21:44.135 }, 00:21:44.135 "vs": { 00:21:44.135 "nvme_version": "1.3" 00:21:44.135 }, 00:21:44.135 "ns_data": { 00:21:44.135 "id": 1, 00:21:44.135 "can_share": true 00:21:44.135 } 00:21:44.135 } 00:21:44.135 ], 00:21:44.135 "mp_policy": "active_passive" 00:21:44.135 } 00:21:44.135 } 00:21:44.135 ] 00:21:44.135 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.135 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:44.135 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.135 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:44.135 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.135 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.96FPbPENcW 00:21:44.135 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:44.135 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:21:44.136 14:03:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:44.136 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:21:44.136 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:44.136 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:21:44.136 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:44.136 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:44.136 rmmod nvme_tcp 00:21:44.136 rmmod nvme_fabrics 00:21:44.136 rmmod nvme_keyring 00:21:44.136 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:44.136 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:21:44.136 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:21:44.136 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3035212 ']' 00:21:44.136 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3035212 00:21:44.136 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 3035212 ']' 00:21:44.136 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 3035212 00:21:44.136 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:21:44.136 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:44.136 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3035212 00:21:44.136 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:44.136 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:44.136 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3035212' 00:21:44.136 killing process with pid 3035212 00:21:44.136 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 3035212 00:21:44.136 [2024-07-26 14:03:11.498696] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:44.136 [2024-07-26 14:03:11.498719] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:44.136 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 3035212 00:21:44.396 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:44.396 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:44.396 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:44.396 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:44.396 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:44.396 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.396 14:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.396 14:03:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.306 14:03:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:46.306 00:21:46.306 real 0m9.317s 00:21:46.306 user 0m3.462s 00:21:46.306 sys 0m4.384s 00:21:46.306 14:03:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:46.306 14:03:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:46.306 ************************************ 00:21:46.306 END TEST nvmf_async_init 00:21:46.306 ************************************ 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.567 ************************************ 00:21:46.567 START TEST dma 00:21:46.567 ************************************ 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:46.567 * Looking for test storage... 00:21:46.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.567 
14:03:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.567 14:03:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.568 14:03:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.568 14:03:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:46.568 14:03:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.568 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:21:46.568 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:46.568 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:46.568 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.568 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.568 14:03:13 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.568 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:46.568 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:46.568 14:03:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:46.568 14:03:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:46.568 14:03:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:46.568 00:21:46.568 real 0m0.095s 00:21:46.568 user 0m0.033s 00:21:46.568 sys 0m0.069s 00:21:46.568 14:03:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:46.568 14:03:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:46.568 ************************************ 00:21:46.568 END TEST dma 00:21:46.568 ************************************ 00:21:46.568 14:03:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:46.568 14:03:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:46.568 14:03:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:46.568 14:03:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.568 ************************************ 00:21:46.568 START TEST nvmf_identify 00:21:46.568 ************************************ 00:21:46.568 14:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:46.828 * Looking for test storage... 00:21:46.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:46.828 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.828 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:46.828 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.828 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.828 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.828 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.828 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.828 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.828 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.828 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.828 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.828 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.828 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.828 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:21:46.829 14:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:52.163 14:03:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:52.163 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.163 14:03:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:52.163 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:52.163 Found net devices under 0000:86:00.0: cvl_0_0 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:52.163 Found net devices under 0000:86:00.1: cvl_0_1 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:52.163 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:52.164 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:52.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:52.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:21:52.164 00:21:52.164 --- 10.0.0.2 ping statistics --- 00:21:52.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.164 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:21:52.164 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:52.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.452 ms 00:21:52.164 00:21:52.164 --- 10.0.0.1 ping statistics --- 00:21:52.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.164 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:21:52.164 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.164 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:21:52.164 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:52.164 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.164 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:52.164 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:52.164 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.164 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:52.164 14:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:52.164 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:52.164 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:52.164 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:52.164 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3038962 00:21:52.164 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:52.164 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3038962 00:21:52.164 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 3038962 ']' 00:21:52.164 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.164 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:52.164 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.164 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:52.164 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:52.164 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:52.164 [2024-07-26 14:03:19.051473] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
00:21:52.164 [2024-07-26 14:03:19.051515] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.164 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.164 [2024-07-26 14:03:19.109643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:52.164 [2024-07-26 14:03:19.191417] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.164 [2024-07-26 14:03:19.191452] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.164 [2024-07-26 14:03:19.191460] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.164 [2024-07-26 14:03:19.191466] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.164 [2024-07-26 14:03:19.191471] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:52.164 [2024-07-26 14:03:19.191507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.164 [2024-07-26 14:03:19.191523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.164 [2024-07-26 14:03:19.191612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:52.164 [2024-07-26 14:03:19.191613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:52.736 [2024-07-26 14:03:19.878165] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:52.736 Malloc0 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:52.736 [2024-07-26 14:03:19.965889] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:52.736 [ 00:21:52.736 { 00:21:52.736 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:52.736 "subtype": "Discovery", 00:21:52.736 "listen_addresses": [ 00:21:52.736 { 00:21:52.736 "trtype": "TCP", 00:21:52.736 "adrfam": "IPv4", 00:21:52.736 "traddr": "10.0.0.2", 00:21:52.736 "trsvcid": "4420" 00:21:52.736 } 00:21:52.736 ], 00:21:52.736 "allow_any_host": true, 00:21:52.736 "hosts": [] 00:21:52.736 }, 00:21:52.736 { 00:21:52.736 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.736 "subtype": "NVMe", 00:21:52.736 "listen_addresses": [ 00:21:52.736 { 00:21:52.736 "trtype": "TCP", 00:21:52.736 "adrfam": "IPv4", 00:21:52.736 "traddr": "10.0.0.2", 00:21:52.736 "trsvcid": "4420" 00:21:52.736 } 00:21:52.736 ], 00:21:52.736 "allow_any_host": true, 00:21:52.736 "hosts": [], 00:21:52.736 "serial_number": "SPDK00000000000001", 00:21:52.736 "model_number": "SPDK bdev Controller", 00:21:52.736 "max_namespaces": 32, 00:21:52.736 "min_cntlid": 1, 00:21:52.736 "max_cntlid": 65519, 00:21:52.736 "namespaces": [ 00:21:52.736 { 00:21:52.736 "nsid": 1, 00:21:52.736 "bdev_name": "Malloc0", 00:21:52.736 "name": "Malloc0", 00:21:52.736 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:52.736 "eui64": "ABCDEF0123456789", 00:21:52.736 "uuid": "0e8f332e-e6d9-41cb-b34f-dcf5f2adb34b" 00:21:52.736 } 00:21:52.736 ] 00:21:52.736 } 00:21:52.736 ] 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.736 14:03:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:52.736 [2024-07-26 14:03:20.017058] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:21:52.736 [2024-07-26 14:03:20.017099] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3039210 ] 00:21:52.736 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.736 [2024-07-26 14:03:20.048815] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:52.736 [2024-07-26 14:03:20.048864] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:52.736 [2024-07-26 14:03:20.048869] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:52.736 [2024-07-26 14:03:20.048881] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:52.736 [2024-07-26 14:03:20.048890] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:52.736 [2024-07-26 14:03:20.049608] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:52.736 [2024-07-26 14:03:20.049635] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d07ec0 0 00:21:52.736 [2024-07-26 14:03:20.056054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:52.736 [2024-07-26 14:03:20.056067] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:52.736 [2024-07-26 14:03:20.056073] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:52.736 [2024-07-26 14:03:20.056075] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:52.736 [2024-07-26 14:03:20.056113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.736 [2024-07-26 14:03:20.056119] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.056123] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d07ec0) 00:21:52.737 [2024-07-26 14:03:20.056136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:52.737 [2024-07-26 14:03:20.056162] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8ae40, cid 0, qid 0 00:21:52.737 [2024-07-26 14:03:20.063053] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.737 [2024-07-26 14:03:20.063062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.737 [2024-07-26 14:03:20.063065] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.063069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8ae40) on tqpair=0x1d07ec0 00:21:52.737 [2024-07-26 14:03:20.063080] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:52.737 [2024-07-26 14:03:20.063086] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:52.737 [2024-07-26 14:03:20.063091] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] 
setting state to read vs wait for vs (no timeout) 00:21:52.737 [2024-07-26 14:03:20.063103] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.063107] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.063110] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d07ec0) 00:21:52.737 [2024-07-26 14:03:20.063118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.737 [2024-07-26 14:03:20.063131] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8ae40, cid 0, qid 0 00:21:52.737 [2024-07-26 14:03:20.063486] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.737 [2024-07-26 14:03:20.063502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.737 [2024-07-26 14:03:20.063505] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.063509] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8ae40) on tqpair=0x1d07ec0 00:21:52.737 [2024-07-26 14:03:20.063519] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:52.737 [2024-07-26 14:03:20.063527] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:52.737 [2024-07-26 14:03:20.063536] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.063540] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.063543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d07ec0) 00:21:52.737 [2024-07-26 14:03:20.063551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.737 [2024-07-26 14:03:20.063565] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8ae40, cid 0, qid 0 00:21:52.737 [2024-07-26 14:03:20.063750] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.737 [2024-07-26 14:03:20.063760] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.737 [2024-07-26 14:03:20.063763] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.063767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8ae40) on tqpair=0x1d07ec0 00:21:52.737 [2024-07-26 14:03:20.063772] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:52.737 [2024-07-26 14:03:20.063780] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:52.737 [2024-07-26 14:03:20.063788] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.063791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.063794] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d07ec0) 00:21:52.737 [2024-07-26 14:03:20.063802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.737 [2024-07-26 14:03:20.063817] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8ae40, cid 0, qid 0 00:21:52.737 [2024-07-26 14:03:20.063980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.737 [2024-07-26 14:03:20.063990] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.737 [2024-07-26 14:03:20.063993] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.063996] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8ae40) on tqpair=0x1d07ec0 00:21:52.737 [2024-07-26 14:03:20.064002] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:52.737 [2024-07-26 14:03:20.064013] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.064017] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.064020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d07ec0) 00:21:52.737 [2024-07-26 14:03:20.064028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.737 [2024-07-26 14:03:20.064040] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8ae40, cid 0, qid 0 00:21:52.737 [2024-07-26 14:03:20.064206] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.737 [2024-07-26 14:03:20.064215] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.737 [2024-07-26 14:03:20.064219] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.064223] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8ae40) on tqpair=0x1d07ec0 00:21:52.737 [2024-07-26 14:03:20.064227] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:52.737 [2024-07-26 14:03:20.064232] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:52.737 [2024-07-26 14:03:20.064240] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:52.737 [2024-07-26 14:03:20.064345] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:52.737 [2024-07-26 14:03:20.064349] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:52.737 [2024-07-26 14:03:20.064358] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.064361] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.064365] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d07ec0) 00:21:52.737 [2024-07-26 14:03:20.064371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.737 [2024-07-26 14:03:20.064384] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8ae40, cid 0, qid 0 00:21:52.737 [2024-07-26 14:03:20.064544] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:21:52.737 [2024-07-26 14:03:20.064554] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.737 [2024-07-26 14:03:20.064557] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.064561] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8ae40) on tqpair=0x1d07ec0 00:21:52.737 [2024-07-26 14:03:20.064566] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:52.737 [2024-07-26 14:03:20.064576] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.064580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.064583] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d07ec0) 00:21:52.737 [2024-07-26 14:03:20.064589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.737 [2024-07-26 14:03:20.064605] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8ae40, cid 0, qid 0 00:21:52.737 [2024-07-26 14:03:20.064765] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.737 [2024-07-26 14:03:20.064774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.737 [2024-07-26 14:03:20.064777] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.064781] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8ae40) on tqpair=0x1d07ec0 00:21:52.737 [2024-07-26 14:03:20.064785] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:52.737 [2024-07-26 14:03:20.064790] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:52.737 [2024-07-26 14:03:20.064798] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:52.737 [2024-07-26 14:03:20.064806] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:52.737 [2024-07-26 14:03:20.064816] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.064820] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d07ec0) 00:21:52.737 [2024-07-26 14:03:20.064826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.737 [2024-07-26 14:03:20.064839] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8ae40, cid 0, qid 0 00:21:52.737 [2024-07-26 14:03:20.065134] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:52.737 [2024-07-26 14:03:20.065146] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:52.737 [2024-07-26 14:03:20.065149] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.065152] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d07ec0): datao=0, datal=4096, cccid=0 00:21:52.737 [2024-07-26 14:03:20.065157] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8ae40) on tqpair(0x1d07ec0): expected_datao=0, payload_size=4096 00:21:52.737 [2024-07-26 14:03:20.065161] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.065168] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.065171] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.065483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.737 [2024-07-26 14:03:20.065488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.737 [2024-07-26 14:03:20.065491] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.737 [2024-07-26 14:03:20.065494] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8ae40) on tqpair=0x1d07ec0 00:21:52.737 [2024-07-26 14:03:20.065501] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:52.737 [2024-07-26 14:03:20.065505] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:52.738 [2024-07-26 14:03:20.065509] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:52.738 [2024-07-26 14:03:20.065513] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:52.738 [2024-07-26 14:03:20.065517] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:52.738 [2024-07-26 14:03:20.065521] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:52.738 [2024-07-26 14:03:20.065530] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:52.738 [2024-07-26 14:03:20.065542] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.065546] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.065549] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d07ec0) 00:21:52.738 [2024-07-26 14:03:20.065556] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:52.738 [2024-07-26 14:03:20.065568] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8ae40, cid 0, qid 0 00:21:52.738 [2024-07-26 14:03:20.065740] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.738 [2024-07-26 14:03:20.065749] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.738 [2024-07-26 14:03:20.065752] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.065756] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8ae40) on tqpair=0x1d07ec0 00:21:52.738 [2024-07-26 14:03:20.065764] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.065768] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.065771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x1d07ec0) 00:21:52.738 [2024-07-26 14:03:20.065776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.738 [2024-07-26 14:03:20.065782] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.065785] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.065788] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d07ec0) 00:21:52.738 [2024-07-26 14:03:20.065793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.738 [2024-07-26 14:03:20.065797] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.065801] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.065804] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d07ec0) 00:21:52.738 [2024-07-26 14:03:20.065808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.738 [2024-07-26 14:03:20.065813] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.065816] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.065819] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d07ec0) 00:21:52.738 [2024-07-26 14:03:20.065824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.738 [2024-07-26 14:03:20.065829] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:52.738 [2024-07-26 14:03:20.065841] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:52.738 [2024-07-26 14:03:20.065847] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.065850] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d07ec0) 00:21:52.738 [2024-07-26 14:03:20.065856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.738 [2024-07-26 14:03:20.065869] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8ae40, cid 0, qid 0 00:21:52.738 [2024-07-26 14:03:20.065873] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8afc0, cid 1, qid 0 00:21:52.738 [2024-07-26 14:03:20.065877] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8b140, cid 2, qid 0 00:21:52.738 [2024-07-26 14:03:20.065884] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8b2c0, cid 3, qid 0 00:21:52.738 [2024-07-26 14:03:20.065888] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8b440, cid 4, qid 0 00:21:52.738 [2024-07-26 14:03:20.066096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.738 [2024-07-26 14:03:20.066106] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.738 [2024-07-26 14:03:20.066110] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.066113] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8b440) on tqpair=0x1d07ec0 00:21:52.738 [2024-07-26 14:03:20.066118] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:52.738 [2024-07-26 14:03:20.066123] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:52.738 [2024-07-26 14:03:20.066135] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.066139] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d07ec0) 00:21:52.738 [2024-07-26 14:03:20.066146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.738 [2024-07-26 14:03:20.066159] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8b440, cid 4, qid 0 00:21:52.738 [2024-07-26 14:03:20.066332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:52.738 [2024-07-26 14:03:20.066341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:52.738 [2024-07-26 14:03:20.066344] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.066348] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d07ec0): datao=0, datal=4096, cccid=4 00:21:52.738 [2024-07-26 14:03:20.066352] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8b440) on tqpair(0x1d07ec0): expected_datao=0, payload_size=4096 00:21:52.738 [2024-07-26 14:03:20.066356] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.066647] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.066650] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.110051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.738 [2024-07-26 14:03:20.110062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.738 [2024-07-26 14:03:20.110065] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.110069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8b440) on tqpair=0x1d07ec0 00:21:52.738 [2024-07-26 14:03:20.110082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:52.738 [2024-07-26 14:03:20.110106] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.110111] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d07ec0) 00:21:52.738 [2024-07-26 14:03:20.110117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.738 [2024-07-26 14:03:20.110124] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.110127] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.110130] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d07ec0) 00:21:52.738 [2024-07-26 
14:03:20.110135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.738 [2024-07-26 14:03:20.110150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8b440, cid 4, qid 0 00:21:52.738 [2024-07-26 14:03:20.110155] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8b5c0, cid 5, qid 0 00:21:52.738 [2024-07-26 14:03:20.110439] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:52.738 [2024-07-26 14:03:20.110449] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:52.738 [2024-07-26 14:03:20.110452] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.110456] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d07ec0): datao=0, datal=1024, cccid=4 00:21:52.738 [2024-07-26 14:03:20.110460] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8b440) on tqpair(0x1d07ec0): expected_datao=0, payload_size=1024 00:21:52.738 [2024-07-26 14:03:20.110464] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.110470] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.110474] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.110479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.738 [2024-07-26 14:03:20.110484] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.738 [2024-07-26 14:03:20.110487] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.110491] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8b5c0) on tqpair=0x1d07ec0 00:21:52.738 [2024-07-26 14:03:20.151287] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.738 [2024-07-26 14:03:20.151301] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.738 [2024-07-26 14:03:20.151304] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.151307] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8b440) on tqpair=0x1d07ec0 00:21:52.738 [2024-07-26 14:03:20.151326] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.151330] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d07ec0) 00:21:52.738 [2024-07-26 14:03:20.151336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.738 [2024-07-26 14:03:20.151354] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8b440, cid 4, qid 0 00:21:52.738 [2024-07-26 14:03:20.151520] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:52.738 [2024-07-26 14:03:20.151531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:52.738 [2024-07-26 14:03:20.151534] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.151537] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d07ec0): datao=0, datal=3072, cccid=4 00:21:52.738 [2024-07-26 14:03:20.151541] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8b440) on tqpair(0x1d07ec0): expected_datao=0, payload_size=3072 00:21:52.738 
[2024-07-26 14:03:20.151545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.151857] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:52.738 [2024-07-26 14:03:20.151860] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:52.739 [2024-07-26 14:03:20.151994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.739 [2024-07-26 14:03:20.152004] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.739 [2024-07-26 14:03:20.152007] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.739 [2024-07-26 14:03:20.152011] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8b440) on tqpair=0x1d07ec0 00:21:52.739 [2024-07-26 14:03:20.152020] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.739 [2024-07-26 14:03:20.152023] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d07ec0) 00:21:52.739 [2024-07-26 14:03:20.152030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.739 [2024-07-26 14:03:20.152055] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8b440, cid 4, qid 0 00:21:52.739 [2024-07-26 14:03:20.152452] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:52.739 [2024-07-26 14:03:20.152461] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:52.739 [2024-07-26 14:03:20.152464] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:52.739 [2024-07-26 14:03:20.152467] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d07ec0): datao=0, datal=8, cccid=4 00:21:52.739 [2024-07-26 14:03:20.152471] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8b440) on tqpair(0x1d07ec0): expected_datao=0, payload_size=8 00:21:52.739 [2024-07-26 14:03:20.152474] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.739 [2024-07-26 14:03:20.152480] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:52.739 [2024-07-26 14:03:20.152483] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:53.003 [2024-07-26 14:03:20.193286] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.003 [2024-07-26 14:03:20.193301] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.003 [2024-07-26 14:03:20.193304] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.003 [2024-07-26 14:03:20.193308] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8b440) on tqpair=0x1d07ec0 00:21:53.003 ===================================================== 00:21:53.003 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:53.003 ===================================================== 00:21:53.003 Controller Capabilities/Features 00:21:53.003 ================================ 00:21:53.003 Vendor ID: 0000 00:21:53.003 Subsystem Vendor ID: 0000 00:21:53.003 Serial Number: .................... 00:21:53.003 Model Number: ........................................ 
00:21:53.003 Firmware Version: 24.09 00:21:53.003 Recommended Arb Burst: 0 00:21:53.003 IEEE OUI Identifier: 00 00 00 00:21:53.003 Multi-path I/O 00:21:53.003 May have multiple subsystem ports: No 00:21:53.003 May have multiple controllers: No 00:21:53.003 Associated with SR-IOV VF: No 00:21:53.003 Max Data Transfer Size: 131072 00:21:53.003 Max Number of Namespaces: 0 00:21:53.003 Max Number of I/O Queues: 1024 00:21:53.003 NVMe Specification Version (VS): 1.3 00:21:53.003 NVMe Specification Version (Identify): 1.3 00:21:53.003 Maximum Queue Entries: 128 00:21:53.004 Contiguous Queues Required: Yes 00:21:53.004 Arbitration Mechanisms Supported 00:21:53.004 Weighted Round Robin: Not Supported 00:21:53.004 Vendor Specific: Not Supported 00:21:53.004 Reset Timeout: 15000 ms 00:21:53.004 Doorbell Stride: 4 bytes 00:21:53.004 NVM Subsystem Reset: Not Supported 00:21:53.004 Command Sets Supported 00:21:53.004 NVM Command Set: Supported 00:21:53.004 Boot Partition: Not Supported 00:21:53.004 Memory Page Size Minimum: 4096 bytes 00:21:53.004 Memory Page Size Maximum: 4096 bytes 00:21:53.004 Persistent Memory Region: Not Supported 00:21:53.004 Optional Asynchronous Events Supported 00:21:53.004 Namespace Attribute Notices: Not Supported 00:21:53.004 Firmware Activation Notices: Not Supported 00:21:53.004 ANA Change Notices: Not Supported 00:21:53.004 PLE Aggregate Log Change Notices: Not Supported 00:21:53.004 LBA Status Info Alert Notices: Not Supported 00:21:53.004 EGE Aggregate Log Change Notices: Not Supported 00:21:53.004 Normal NVM Subsystem Shutdown event: Not Supported 00:21:53.004 Zone Descriptor Change Notices: Not Supported 00:21:53.004 Discovery Log Change Notices: Supported 00:21:53.004 Controller Attributes 00:21:53.004 128-bit Host Identifier: Not Supported 00:21:53.004 Non-Operational Permissive Mode: Not Supported 00:21:53.004 NVM Sets: Not Supported 00:21:53.004 Read Recovery Levels: Not Supported 00:21:53.004 Endurance Groups: Not Supported 00:21:53.004 Predictable Latency Mode: Not Supported 00:21:53.004 Traffic Based Keep ALive: Not Supported 00:21:53.004 Namespace Granularity: Not Supported 00:21:53.004 SQ Associations: Not Supported 00:21:53.004 UUID List: Not Supported 00:21:53.004 Multi-Domain Subsystem: Not Supported 00:21:53.004 Fixed Capacity Management: Not Supported 00:21:53.004 Variable Capacity Management: Not Supported 00:21:53.004 Delete Endurance Group: Not Supported 00:21:53.004 Delete NVM Set: Not Supported 00:21:53.004 Extended LBA Formats Supported: Not Supported 00:21:53.004 Flexible Data Placement Supported: Not Supported 00:21:53.004 00:21:53.004 Controller Memory Buffer Support 00:21:53.004 ================================ 00:21:53.004 Supported: No 00:21:53.004 00:21:53.004 Persistent Memory Region Support 00:21:53.004 ================================ 00:21:53.004 Supported: No 00:21:53.004 00:21:53.004 Admin Command Set Attributes 00:21:53.004 ============================ 00:21:53.004 Security Send/Receive: Not Supported 00:21:53.004 Format NVM: Not Supported 00:21:53.004 Firmware Activate/Download: Not Supported 00:21:53.004 Namespace Management: Not Supported 00:21:53.004 Device Self-Test: Not Supported 00:21:53.004 Directives: Not Supported 00:21:53.004 NVMe-MI: Not Supported 00:21:53.004 Virtualization Management: Not Supported 00:21:53.004 Doorbell Buffer Config: Not Supported 00:21:53.004 Get LBA Status Capability: Not Supported 00:21:53.004 Command & Feature Lockdown Capability: Not Supported 00:21:53.004 Abort Command Limit: 1 00:21:53.004 Async 
Event Request Limit: 4 00:21:53.004 Number of Firmware Slots: N/A 00:21:53.004 Firmware Slot 1 Read-Only: N/A 00:21:53.004 Firmware Activation Without Reset: N/A 00:21:53.004 Multiple Update Detection Support: N/A 00:21:53.004 Firmware Update Granularity: No Information Provided 00:21:53.004 Per-Namespace SMART Log: No 00:21:53.004 Asymmetric Namespace Access Log Page: Not Supported 00:21:53.004 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:53.004 Command Effects Log Page: Not Supported 00:21:53.004 Get Log Page Extended Data: Supported 00:21:53.004 Telemetry Log Pages: Not Supported 00:21:53.004 Persistent Event Log Pages: Not Supported 00:21:53.004 Supported Log Pages Log Page: May Support 00:21:53.004 Commands Supported & Effects Log Page: Not Supported 00:21:53.004 Feature Identifiers & Effects Log Page:May Support 00:21:53.004 NVMe-MI Commands & Effects Log Page: May Support 00:21:53.004 Data Area 4 for Telemetry Log: Not Supported 00:21:53.004 Error Log Page Entries Supported: 128 00:21:53.004 Keep Alive: Not Supported 00:21:53.004 00:21:53.004 NVM Command Set Attributes 00:21:53.004 ========================== 00:21:53.004 Submission Queue Entry Size 00:21:53.004 Max: 1 00:21:53.004 Min: 1 00:21:53.004 Completion Queue Entry Size 00:21:53.004 Max: 1 00:21:53.004 Min: 1 00:21:53.004 Number of Namespaces: 0 00:21:53.004 Compare Command: Not Supported 00:21:53.004 Write Uncorrectable Command: Not Supported 00:21:53.004 Dataset Management Command: Not Supported 00:21:53.004 Write Zeroes Command: Not Supported 00:21:53.004 Set Features Save Field: Not Supported 00:21:53.004 Reservations: Not Supported 00:21:53.004 Timestamp: Not Supported 00:21:53.004 Copy: Not Supported 00:21:53.004 Volatile Write Cache: Not Present 00:21:53.004 Atomic Write Unit (Normal): 1 00:21:53.004 Atomic Write Unit (PFail): 1 00:21:53.004 Atomic Compare & Write Unit: 1 00:21:53.004 Fused Compare & Write: Supported 00:21:53.004 Scatter-Gather List 00:21:53.004 SGL Command Set: Supported 00:21:53.004 SGL Keyed: Supported 00:21:53.004 SGL Bit Bucket Descriptor: Not Supported 00:21:53.004 SGL Metadata Pointer: Not Supported 00:21:53.004 Oversized SGL: Not Supported 00:21:53.004 SGL Metadata Address: Not Supported 00:21:53.004 SGL Offset: Supported 00:21:53.004 Transport SGL Data Block: Not Supported 00:21:53.004 Replay Protected Memory Block: Not Supported 00:21:53.004 00:21:53.004 Firmware Slot Information 00:21:53.004 ========================= 00:21:53.004 Active slot: 0 00:21:53.004 00:21:53.004 00:21:53.004 Error Log 00:21:53.004 ========= 00:21:53.004 00:21:53.004 Active Namespaces 00:21:53.004 ================= 00:21:53.004 Discovery Log Page 00:21:53.004 ================== 00:21:53.004 Generation Counter: 2 00:21:53.004 Number of Records: 2 00:21:53.004 Record Format: 0 00:21:53.004 00:21:53.004 Discovery Log Entry 0 00:21:53.004 ---------------------- 00:21:53.004 Transport Type: 3 (TCP) 00:21:53.004 Address Family: 1 (IPv4) 00:21:53.004 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:53.004 Entry Flags: 00:21:53.004 Duplicate Returned Information: 1 00:21:53.004 Explicit Persistent Connection Support for Discovery: 1 00:21:53.004 Transport Requirements: 00:21:53.004 Secure Channel: Not Required 00:21:53.004 Port ID: 0 (0x0000) 00:21:53.004 Controller ID: 65535 (0xffff) 00:21:53.004 Admin Max SQ Size: 128 00:21:53.005 Transport Service Identifier: 4420 00:21:53.005 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:53.005 Transport Address: 10.0.0.2 00:21:53.005 
Discovery Log Entry 1 00:21:53.005 ---------------------- 00:21:53.005 Transport Type: 3 (TCP) 00:21:53.005 Address Family: 1 (IPv4) 00:21:53.005 Subsystem Type: 2 (NVM Subsystem) 00:21:53.005 Entry Flags: 00:21:53.005 Duplicate Returned Information: 0 00:21:53.005 Explicit Persistent Connection Support for Discovery: 0 00:21:53.005 Transport Requirements: 00:21:53.005 Secure Channel: Not Required 00:21:53.005 Port ID: 0 (0x0000) 00:21:53.005 Controller ID: 65535 (0xffff) 00:21:53.005 Admin Max SQ Size: 128 00:21:53.005 Transport Service Identifier: 4420 00:21:53.005 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:53.005 Transport Address: 10.0.0.2 [2024-07-26 14:03:20.193386] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:53.005 [2024-07-26 14:03:20.193396] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8ae40) on tqpair=0x1d07ec0 00:21:53.005 [2024-07-26 14:03:20.193403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.005 [2024-07-26 14:03:20.193407] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8afc0) on tqpair=0x1d07ec0 00:21:53.005 [2024-07-26 14:03:20.193411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.005 [2024-07-26 14:03:20.193415] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8b140) on tqpair=0x1d07ec0 00:21:53.005 [2024-07-26 14:03:20.193419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.005 [2024-07-26 14:03:20.193424] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8b2c0) on tqpair=0x1d07ec0 00:21:53.005 [2024-07-26 14:03:20.193427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.005 [2024-07-26 14:03:20.193437] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.005 [2024-07-26 14:03:20.193441] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.005 [2024-07-26 14:03:20.193444] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d07ec0) 00:21:53.005 [2024-07-26 14:03:20.193450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.005 [2024-07-26 14:03:20.193464] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8b2c0, cid 3, qid 0 00:21:53.005 [2024-07-26 14:03:20.193624] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.005 [2024-07-26 14:03:20.193634] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.005 [2024-07-26 14:03:20.193637] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.005 [2024-07-26 14:03:20.193641] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8b2c0) on tqpair=0x1d07ec0 00:21:53.005 [2024-07-26 14:03:20.193648] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.005 [2024-07-26 14:03:20.193651] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.005 [2024-07-26 14:03:20.193654] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d07ec0) 00:21:53.005 [2024-07-26 
14:03:20.193661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.005 [2024-07-26 14:03:20.193677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8b2c0, cid 3, qid 0 00:21:53.005 [2024-07-26 14:03:20.193848] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.005 [2024-07-26 14:03:20.193858] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.005 [2024-07-26 14:03:20.193861] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.005 [2024-07-26 14:03:20.193865] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8b2c0) on tqpair=0x1d07ec0 00:21:53.005 [2024-07-26 14:03:20.193869] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:53.005 [2024-07-26 14:03:20.193873] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:53.005 [2024-07-26 14:03:20.193883] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.005 [2024-07-26 14:03:20.193887] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.005 [2024-07-26 14:03:20.193890] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d07ec0) 00:21:53.005 [2024-07-26 14:03:20.193897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.005 [2024-07-26 14:03:20.193909] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8b2c0, cid 3, qid 0 00:21:53.005 [2024-07-26 14:03:20.197700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.005 [2024-07-26 14:03:20.197713] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.005 [2024-07-26 14:03:20.197717] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.005 [2024-07-26 14:03:20.197720] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8b2c0) on tqpair=0x1d07ec0 00:21:53.005 [2024-07-26 14:03:20.197733] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.005 [2024-07-26 14:03:20.197737] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.005 [2024-07-26 14:03:20.197740] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d07ec0) 00:21:53.005 [2024-07-26 14:03:20.197747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.005 [2024-07-26 14:03:20.197760] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8b2c0, cid 3, qid 0 00:21:53.005 [2024-07-26 14:03:20.198062] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.005 [2024-07-26 14:03:20.198073] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.005 [2024-07-26 14:03:20.198076] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.005 [2024-07-26 14:03:20.198080] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8b2c0) on tqpair=0x1d07ec0 00:21:53.005 [2024-07-26 14:03:20.198088] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:21:53.005 00:21:53.005 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:53.005 [2024-07-26 14:03:20.238653] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:21:53.005 [2024-07-26 14:03:20.238692] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3039215 ] 00:21:53.005 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.005 [2024-07-26 14:03:20.268461] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:53.005 [2024-07-26 14:03:20.268502] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:53.005 [2024-07-26 14:03:20.268509] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:53.005 [2024-07-26 14:03:20.268520] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:53.005 [2024-07-26 14:03:20.268527] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:53.005 [2024-07-26 14:03:20.269130] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:53.005 [2024-07-26 14:03:20.269151] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b7dec0 0 00:21:53.005 [2024-07-26 14:03:20.276052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:53.005 [2024-07-26 14:03:20.276073] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:53.005 [2024-07-26 14:03:20.276077] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:53.005 [2024-07-26 14:03:20.276080] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:53.005 [2024-07-26 14:03:20.276113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.005 [2024-07-26 14:03:20.276118] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.005 [2024-07-26 14:03:20.276121] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7dec0) 00:21:53.005 [2024-07-26 14:03:20.276132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:53.005 [2024-07-26 14:03:20.276147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00e40, cid 0, qid 0 00:21:53.005 [2024-07-26 14:03:20.283053] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.005 [2024-07-26 14:03:20.283061] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.005 [2024-07-26 14:03:20.283064] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.005 [2024-07-26 14:03:20.283067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00e40) on tqpair=0x1b7dec0 00:21:53.005 [2024-07-26 14:03:20.283077] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:53.005 [2024-07-26 14:03:20.283083] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:53.005 [2024-07-26 14:03:20.283088] 
nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:53.005 [2024-07-26 14:03:20.283099] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.005 [2024-07-26 14:03:20.283102] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.005 [2024-07-26 14:03:20.283105] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7dec0) 00:21:53.005 [2024-07-26 14:03:20.283112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.005 [2024-07-26 14:03:20.283124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00e40, cid 0, qid 0 00:21:53.005 [2024-07-26 14:03:20.283376] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.005 [2024-07-26 14:03:20.283386] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.005 [2024-07-26 14:03:20.283389] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.005 [2024-07-26 14:03:20.283393] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00e40) on tqpair=0x1b7dec0 00:21:53.005 [2024-07-26 14:03:20.283401] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:53.006 [2024-07-26 14:03:20.283409] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:53.006 [2024-07-26 14:03:20.283416] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.283420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.283423] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7dec0) 00:21:53.006 [2024-07-26 14:03:20.283429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.006 [2024-07-26 14:03:20.283444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00e40, cid 0, qid 0 00:21:53.006 [2024-07-26 14:03:20.283606] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.006 [2024-07-26 14:03:20.283616] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.006 [2024-07-26 14:03:20.283619] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.283622] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00e40) on tqpair=0x1b7dec0 00:21:53.006 [2024-07-26 14:03:20.283627] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:53.006 [2024-07-26 14:03:20.283636] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:53.006 [2024-07-26 14:03:20.283642] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.283645] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.283649] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7dec0) 00:21:53.006 [2024-07-26 14:03:20.283655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
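The target-side state exercised in this run (namespace Malloc0 with the fixed NGUID/EUI64, plus TCP listeners on 10.0.0.2:4420 for both the discovery subsystem and nqn.2016-06.io.spdk:cnode1) can be reproduced outside the test harness with SPDK's scripts/rpc.py. This is a minimal sketch: the three listener/namespace/query calls mirror the rpc_cmd lines captured above, while the transport, bdev, and subsystem creation steps are the usual prerequisites assumed here because they are not shown in this part of the log (the malloc size and block size are illustrative; the serial number and max_namespaces values match the nvmf_get_subsystems JSON above).

  # assumes an nvmf_tgt process is already running with the default RPC socket
  scripts/rpc.py nvmf_create_transport -t TCP
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MiB, 512-byte blocks (illustrative values)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 32
  # the following calls correspond to the rpc_cmd invocations in this log
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_get_subsystems   # should return the same two-subsystem JSON shown earlier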
00:21:53.006 [2024-07-26 14:03:20.283666] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00e40, cid 0, qid 0 00:21:53.006 [2024-07-26 14:03:20.283820] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.006 [2024-07-26 14:03:20.283830] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.006 [2024-07-26 14:03:20.283833] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.283836] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00e40) on tqpair=0x1b7dec0 00:21:53.006 [2024-07-26 14:03:20.283841] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:53.006 [2024-07-26 14:03:20.283852] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.283855] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.283858] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7dec0) 00:21:53.006 [2024-07-26 14:03:20.283865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.006 [2024-07-26 14:03:20.283877] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00e40, cid 0, qid 0 00:21:53.006 [2024-07-26 14:03:20.284034] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.006 [2024-07-26 14:03:20.284050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.006 [2024-07-26 14:03:20.284054] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.284057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00e40) on tqpair=0x1b7dec0 00:21:53.006 [2024-07-26 14:03:20.284061] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:53.006 [2024-07-26 14:03:20.284066] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:53.006 [2024-07-26 14:03:20.284074] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:53.006 [2024-07-26 14:03:20.284179] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:53.006 [2024-07-26 14:03:20.284183] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:53.006 [2024-07-26 14:03:20.284190] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.284193] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.284199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7dec0) 00:21:53.006 [2024-07-26 14:03:20.284206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.006 [2024-07-26 14:03:20.284219] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00e40, cid 0, qid 0 00:21:53.006 [2024-07-26 14:03:20.284608] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.006 
[2024-07-26 14:03:20.284613] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.006 [2024-07-26 14:03:20.284616] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.284619] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00e40) on tqpair=0x1b7dec0 00:21:53.006 [2024-07-26 14:03:20.284623] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:53.006 [2024-07-26 14:03:20.284631] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.284635] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.284638] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7dec0) 00:21:53.006 [2024-07-26 14:03:20.284643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.006 [2024-07-26 14:03:20.284653] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00e40, cid 0, qid 0 00:21:53.006 [2024-07-26 14:03:20.284815] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.006 [2024-07-26 14:03:20.284824] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.006 [2024-07-26 14:03:20.284827] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.284831] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00e40) on tqpair=0x1b7dec0 00:21:53.006 [2024-07-26 14:03:20.284835] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:53.006 [2024-07-26 14:03:20.284839] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:53.006 [2024-07-26 14:03:20.284847] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:53.006 [2024-07-26 14:03:20.284855] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:53.006 [2024-07-26 14:03:20.284863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.284866] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7dec0) 00:21:53.006 [2024-07-26 14:03:20.284873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.006 [2024-07-26 14:03:20.284885] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00e40, cid 0, qid 0 00:21:53.006 [2024-07-26 14:03:20.285185] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:53.006 [2024-07-26 14:03:20.285196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:53.006 [2024-07-26 14:03:20.285199] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.285202] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7dec0): datao=0, datal=4096, cccid=0 00:21:53.006 [2024-07-26 14:03:20.285206] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c00e40) on 
tqpair(0x1b7dec0): expected_datao=0, payload_size=4096 00:21:53.006 [2024-07-26 14:03:20.285210] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.285217] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.285220] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.285532] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.006 [2024-07-26 14:03:20.285540] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.006 [2024-07-26 14:03:20.285543] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.285546] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00e40) on tqpair=0x1b7dec0 00:21:53.006 [2024-07-26 14:03:20.285553] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:53.006 [2024-07-26 14:03:20.285557] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:53.006 [2024-07-26 14:03:20.285561] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:53.006 [2024-07-26 14:03:20.285565] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:53.006 [2024-07-26 14:03:20.285568] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:53.006 [2024-07-26 14:03:20.285572] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:53.006 [2024-07-26 14:03:20.285580] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:53.006 [2024-07-26 14:03:20.285589] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.285593] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.285596] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7dec0) 00:21:53.006 [2024-07-26 14:03:20.285603] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:53.006 [2024-07-26 14:03:20.285615] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00e40, cid 0, qid 0 00:21:53.006 [2024-07-26 14:03:20.285779] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.006 [2024-07-26 14:03:20.285788] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.006 [2024-07-26 14:03:20.285791] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.285795] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00e40) on tqpair=0x1b7dec0 00:21:53.006 [2024-07-26 14:03:20.285802] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.285805] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.006 [2024-07-26 14:03:20.285808] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7dec0) 00:21:53.007 [2024-07-26 14:03:20.285814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.007 [2024-07-26 14:03:20.285819] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.285822] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.285825] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b7dec0) 00:21:53.007 [2024-07-26 14:03:20.285830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.007 [2024-07-26 14:03:20.285835] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.285838] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.285841] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b7dec0) 00:21:53.007 [2024-07-26 14:03:20.285846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.007 [2024-07-26 14:03:20.285851] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.285854] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.285857] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7dec0) 00:21:53.007 [2024-07-26 14:03:20.285865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.007 [2024-07-26 14:03:20.285869] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:53.007 [2024-07-26 14:03:20.285881] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:53.007 [2024-07-26 14:03:20.285886] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.285890] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7dec0) 00:21:53.007 [2024-07-26 14:03:20.285895] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.007 [2024-07-26 14:03:20.285908] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00e40, cid 0, qid 0 00:21:53.007 [2024-07-26 14:03:20.285913] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00fc0, cid 1, qid 0 00:21:53.007 [2024-07-26 14:03:20.285917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01140, cid 2, qid 0 00:21:53.007 [2024-07-26 14:03:20.285921] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c012c0, cid 3, qid 0 00:21:53.007 [2024-07-26 14:03:20.285925] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01440, cid 4, qid 0 00:21:53.007 [2024-07-26 14:03:20.286124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.007 [2024-07-26 14:03:20.286134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.007 [2024-07-26 14:03:20.286137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.286141] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01440) on tqpair=0x1b7dec0 
00:21:53.007 [2024-07-26 14:03:20.286145] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:53.007 [2024-07-26 14:03:20.286150] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:53.007 [2024-07-26 14:03:20.286162] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:53.007 [2024-07-26 14:03:20.286167] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:53.007 [2024-07-26 14:03:20.286174] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.286177] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.286180] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7dec0) 00:21:53.007 [2024-07-26 14:03:20.286186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:53.007 [2024-07-26 14:03:20.286199] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01440, cid 4, qid 0 00:21:53.007 [2024-07-26 14:03:20.286359] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.007 [2024-07-26 14:03:20.286369] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.007 [2024-07-26 14:03:20.286372] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.286375] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01440) on tqpair=0x1b7dec0 00:21:53.007 [2024-07-26 14:03:20.286430] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:53.007 [2024-07-26 14:03:20.286440] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:53.007 [2024-07-26 14:03:20.286447] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.286451] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7dec0) 00:21:53.007 [2024-07-26 14:03:20.286459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.007 [2024-07-26 14:03:20.286471] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01440, cid 4, qid 0 00:21:53.007 [2024-07-26 14:03:20.286649] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:53.007 [2024-07-26 14:03:20.286659] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:53.007 [2024-07-26 14:03:20.286662] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.286665] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7dec0): datao=0, datal=4096, cccid=4 00:21:53.007 [2024-07-26 14:03:20.286669] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c01440) on tqpair(0x1b7dec0): expected_datao=0, payload_size=4096 00:21:53.007 [2024-07-26 14:03:20.286673] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:21:53.007 [2024-07-26 14:03:20.286963] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.286967] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.327276] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.007 [2024-07-26 14:03:20.327290] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.007 [2024-07-26 14:03:20.327293] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.327297] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01440) on tqpair=0x1b7dec0 00:21:53.007 [2024-07-26 14:03:20.327306] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:53.007 [2024-07-26 14:03:20.327322] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:53.007 [2024-07-26 14:03:20.327332] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:53.007 [2024-07-26 14:03:20.327339] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.327343] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7dec0) 00:21:53.007 [2024-07-26 14:03:20.327349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.007 [2024-07-26 14:03:20.327362] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01440, cid 4, qid 0 00:21:53.007 [2024-07-26 14:03:20.327544] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:53.007 [2024-07-26 14:03:20.327554] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:53.007 [2024-07-26 14:03:20.327557] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.327560] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7dec0): datao=0, datal=4096, cccid=4 00:21:53.007 [2024-07-26 14:03:20.327564] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c01440) on tqpair(0x1b7dec0): expected_datao=0, payload_size=4096 00:21:53.007 [2024-07-26 14:03:20.327568] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.327574] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.327578] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.327899] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.007 [2024-07-26 14:03:20.327905] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.007 [2024-07-26 14:03:20.327907] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.327911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01440) on tqpair=0x1b7dec0 00:21:53.007 [2024-07-26 14:03:20.327924] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:53.007 [2024-07-26 14:03:20.327936] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors 
(timeout 30000 ms) 00:21:53.007 [2024-07-26 14:03:20.327943] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.327946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7dec0) 00:21:53.007 [2024-07-26 14:03:20.327952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.007 [2024-07-26 14:03:20.327965] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01440, cid 4, qid 0 00:21:53.007 [2024-07-26 14:03:20.328147] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:53.007 [2024-07-26 14:03:20.328158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:53.007 [2024-07-26 14:03:20.328161] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.328164] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7dec0): datao=0, datal=4096, cccid=4 00:21:53.007 [2024-07-26 14:03:20.328168] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c01440) on tqpair(0x1b7dec0): expected_datao=0, payload_size=4096 00:21:53.007 [2024-07-26 14:03:20.328172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.328462] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.328466] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.369297] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.007 [2024-07-26 14:03:20.369310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.007 [2024-07-26 14:03:20.369313] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.007 [2024-07-26 14:03:20.369316] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01440) on tqpair=0x1b7dec0 00:21:53.007 [2024-07-26 14:03:20.369324] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:53.007 [2024-07-26 14:03:20.369333] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:53.008 [2024-07-26 14:03:20.369341] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:53.008 [2024-07-26 14:03:20.369348] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:53.008 [2024-07-26 14:03:20.369352] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:53.008 [2024-07-26 14:03:20.369357] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:53.008 [2024-07-26 14:03:20.369361] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:53.008 [2024-07-26 14:03:20.369365] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:53.008 [2024-07-26 14:03:20.369369] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:53.008 [2024-07-26 14:03:20.369383] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.008 [2024-07-26 14:03:20.369386] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7dec0) 00:21:53.008 [2024-07-26 14:03:20.369393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.008 [2024-07-26 14:03:20.369399] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.008 [2024-07-26 14:03:20.369402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.008 [2024-07-26 14:03:20.369405] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b7dec0) 00:21:53.008 [2024-07-26 14:03:20.369412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.008 [2024-07-26 14:03:20.369427] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01440, cid 4, qid 0 00:21:53.008 [2024-07-26 14:03:20.369432] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c015c0, cid 5, qid 0 00:21:53.008 [2024-07-26 14:03:20.369606] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.008 [2024-07-26 14:03:20.369616] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.008 [2024-07-26 14:03:20.369619] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.008 [2024-07-26 14:03:20.369622] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01440) on tqpair=0x1b7dec0 00:21:53.008 [2024-07-26 14:03:20.369628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.008 [2024-07-26 14:03:20.369633] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.008 [2024-07-26 14:03:20.369636] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.008 [2024-07-26 14:03:20.369639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c015c0) on tqpair=0x1b7dec0 00:21:53.008 [2024-07-26 14:03:20.369648] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.008 [2024-07-26 14:03:20.369652] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b7dec0) 00:21:53.008 [2024-07-26 14:03:20.369658] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.008 [2024-07-26 14:03:20.369670] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c015c0, cid 5, qid 0 00:21:53.008 [2024-07-26 14:03:20.369836] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.008 [2024-07-26 14:03:20.369846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.008 [2024-07-26 14:03:20.369849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.008 [2024-07-26 14:03:20.369852] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c015c0) on tqpair=0x1b7dec0 00:21:53.008 [2024-07-26 14:03:20.369862] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.008 [2024-07-26 14:03:20.369866] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b7dec0) 00:21:53.008 [2024-07-26 14:03:20.369872] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.008 [2024-07-26 14:03:20.369884] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c015c0, cid 5, qid 0 00:21:53.008 [2024-07-26 14:03:20.370055] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.008 [2024-07-26 14:03:20.370065] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.008 [2024-07-26 14:03:20.370068] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.008 [2024-07-26 14:03:20.370072] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c015c0) on tqpair=0x1b7dec0 00:21:53.008 [2024-07-26 14:03:20.370082] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.008 [2024-07-26 14:03:20.370085] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b7dec0) 00:21:53.008 [2024-07-26 14:03:20.370091] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.008 [2024-07-26 14:03:20.370104] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c015c0, cid 5, qid 0 00:21:53.008 [2024-07-26 14:03:20.370261] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.008 [2024-07-26 14:03:20.370271] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.008 [2024-07-26 14:03:20.370274] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.008 [2024-07-26 14:03:20.370277] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c015c0) on tqpair=0x1b7dec0 00:21:53.008 [2024-07-26 14:03:20.370294] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.008 [2024-07-26 14:03:20.370301] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b7dec0) 00:21:53.008 [2024-07-26 14:03:20.370307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.008 [2024-07-26 14:03:20.370314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.008 [2024-07-26 14:03:20.370317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7dec0) 00:21:53.008 [2024-07-26 14:03:20.370322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.008 [2024-07-26 14:03:20.370328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.008 [2024-07-26 14:03:20.370331] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1b7dec0) 00:21:53.008 [2024-07-26 14:03:20.370336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.008 [2024-07-26 14:03:20.370342] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.008 [2024-07-26 14:03:20.370346] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b7dec0) 00:21:53.008 [2024-07-26 14:03:20.370351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.008 [2024-07-26 14:03:20.370363] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c015c0, cid 5, qid 0 00:21:53.008 [2024-07-26 14:03:20.370368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01440, cid 4, qid 0 00:21:53.008 [2024-07-26 14:03:20.370372] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01740, cid 6, qid 0 00:21:53.008 [2024-07-26 14:03:20.370376] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c018c0, cid 7, qid 0 00:21:53.008 [2024-07-26 14:03:20.370819] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:53.008 [2024-07-26 14:03:20.370829] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:53.008 [2024-07-26 14:03:20.370832] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:53.008 [2024-07-26 14:03:20.370836] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7dec0): datao=0, datal=8192, cccid=5 00:21:53.008 [2024-07-26 14:03:20.370839] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c015c0) on tqpair(0x1b7dec0): expected_datao=0, payload_size=8192 00:21:53.008 [2024-07-26 14:03:20.370843] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.008 [2024-07-26 14:03:20.370849] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:53.008 [2024-07-26 14:03:20.370853] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:53.008 [2024-07-26 14:03:20.370858] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:53.008 [2024-07-26 14:03:20.370862] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:53.008 [2024-07-26 14:03:20.370865] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:53.008 [2024-07-26 14:03:20.370868] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7dec0): datao=0, datal=512, cccid=4 00:21:53.008 [2024-07-26 14:03:20.370872] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c01440) on tqpair(0x1b7dec0): expected_datao=0, payload_size=512 00:21:53.009 [2024-07-26 14:03:20.370876] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.009 [2024-07-26 14:03:20.370881] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:53.009 [2024-07-26 14:03:20.370884] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:53.009 [2024-07-26 14:03:20.370888] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:53.009 [2024-07-26 14:03:20.370893] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:53.009 [2024-07-26 14:03:20.370896] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:53.009 [2024-07-26 14:03:20.370902] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7dec0): datao=0, datal=512, cccid=6 00:21:53.009 [2024-07-26 14:03:20.370905] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c01740) on tqpair(0x1b7dec0): expected_datao=0, payload_size=512 00:21:53.009 [2024-07-26 14:03:20.370909] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.009 [2024-07-26 14:03:20.370914] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:53.009 [2024-07-26 14:03:20.370917] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:53.009 [2024-07-26 14:03:20.370922] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:53.009 [2024-07-26 14:03:20.370927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:53.009 [2024-07-26 14:03:20.370929] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:53.009 [2024-07-26 14:03:20.370932] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7dec0): datao=0, datal=4096, cccid=7 00:21:53.009 [2024-07-26 14:03:20.370936] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c018c0) on tqpair(0x1b7dec0): expected_datao=0, payload_size=4096 00:21:53.009 [2024-07-26 14:03:20.370940] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.009 [2024-07-26 14:03:20.370945] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:53.009 [2024-07-26 14:03:20.370948] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:53.009 [2024-07-26 14:03:20.375052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.009 [2024-07-26 14:03:20.375059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.009 [2024-07-26 14:03:20.375062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.009 [2024-07-26 14:03:20.375065] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c015c0) on tqpair=0x1b7dec0 00:21:53.009 [2024-07-26 14:03:20.375076] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.009 [2024-07-26 14:03:20.375082] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.009 [2024-07-26 14:03:20.375084] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.009 [2024-07-26 14:03:20.375088] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01440) on tqpair=0x1b7dec0 00:21:53.009 [2024-07-26 14:03:20.375097] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.009 [2024-07-26 14:03:20.375101] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.009 [2024-07-26 14:03:20.375104] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.009 [2024-07-26 14:03:20.375108] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01740) on tqpair=0x1b7dec0 00:21:53.009 [2024-07-26 14:03:20.375113] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.009 [2024-07-26 14:03:20.375118] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.009 [2024-07-26 14:03:20.375121] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.009 [2024-07-26 14:03:20.375124] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c018c0) on tqpair=0x1b7dec0 00:21:53.009 ===================================================== 00:21:53.009 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:53.009 ===================================================== 00:21:53.009 Controller Capabilities/Features 00:21:53.009 ================================ 00:21:53.009 Vendor ID: 8086 00:21:53.009 Subsystem Vendor ID: 8086 00:21:53.009 Serial Number: SPDK00000000000001 00:21:53.009 Model Number: SPDK bdev Controller 00:21:53.009 Firmware Version: 24.09 00:21:53.009 Recommended Arb Burst: 6 00:21:53.009 IEEE OUI Identifier: e4 d2 5c 00:21:53.009 Multi-path I/O 00:21:53.009 May have multiple subsystem ports: Yes 00:21:53.009 May have multiple controllers: Yes 00:21:53.009 Associated with SR-IOV VF: No 00:21:53.009 Max Data 
Transfer Size: 131072 00:21:53.009 Max Number of Namespaces: 32 00:21:53.009 Max Number of I/O Queues: 127 00:21:53.009 NVMe Specification Version (VS): 1.3 00:21:53.009 NVMe Specification Version (Identify): 1.3 00:21:53.009 Maximum Queue Entries: 128 00:21:53.009 Contiguous Queues Required: Yes 00:21:53.009 Arbitration Mechanisms Supported 00:21:53.009 Weighted Round Robin: Not Supported 00:21:53.009 Vendor Specific: Not Supported 00:21:53.009 Reset Timeout: 15000 ms 00:21:53.009 Doorbell Stride: 4 bytes 00:21:53.009 NVM Subsystem Reset: Not Supported 00:21:53.009 Command Sets Supported 00:21:53.009 NVM Command Set: Supported 00:21:53.009 Boot Partition: Not Supported 00:21:53.009 Memory Page Size Minimum: 4096 bytes 00:21:53.009 Memory Page Size Maximum: 4096 bytes 00:21:53.009 Persistent Memory Region: Not Supported 00:21:53.009 Optional Asynchronous Events Supported 00:21:53.009 Namespace Attribute Notices: Supported 00:21:53.009 Firmware Activation Notices: Not Supported 00:21:53.009 ANA Change Notices: Not Supported 00:21:53.009 PLE Aggregate Log Change Notices: Not Supported 00:21:53.009 LBA Status Info Alert Notices: Not Supported 00:21:53.009 EGE Aggregate Log Change Notices: Not Supported 00:21:53.009 Normal NVM Subsystem Shutdown event: Not Supported 00:21:53.009 Zone Descriptor Change Notices: Not Supported 00:21:53.009 Discovery Log Change Notices: Not Supported 00:21:53.009 Controller Attributes 00:21:53.009 128-bit Host Identifier: Supported 00:21:53.009 Non-Operational Permissive Mode: Not Supported 00:21:53.009 NVM Sets: Not Supported 00:21:53.009 Read Recovery Levels: Not Supported 00:21:53.009 Endurance Groups: Not Supported 00:21:53.009 Predictable Latency Mode: Not Supported 00:21:53.009 Traffic Based Keep ALive: Not Supported 00:21:53.009 Namespace Granularity: Not Supported 00:21:53.009 SQ Associations: Not Supported 00:21:53.009 UUID List: Not Supported 00:21:53.009 Multi-Domain Subsystem: Not Supported 00:21:53.009 Fixed Capacity Management: Not Supported 00:21:53.009 Variable Capacity Management: Not Supported 00:21:53.009 Delete Endurance Group: Not Supported 00:21:53.009 Delete NVM Set: Not Supported 00:21:53.009 Extended LBA Formats Supported: Not Supported 00:21:53.009 Flexible Data Placement Supported: Not Supported 00:21:53.009 00:21:53.009 Controller Memory Buffer Support 00:21:53.009 ================================ 00:21:53.009 Supported: No 00:21:53.009 00:21:53.009 Persistent Memory Region Support 00:21:53.009 ================================ 00:21:53.009 Supported: No 00:21:53.009 00:21:53.009 Admin Command Set Attributes 00:21:53.009 ============================ 00:21:53.009 Security Send/Receive: Not Supported 00:21:53.009 Format NVM: Not Supported 00:21:53.009 Firmware Activate/Download: Not Supported 00:21:53.009 Namespace Management: Not Supported 00:21:53.009 Device Self-Test: Not Supported 00:21:53.009 Directives: Not Supported 00:21:53.009 NVMe-MI: Not Supported 00:21:53.009 Virtualization Management: Not Supported 00:21:53.009 Doorbell Buffer Config: Not Supported 00:21:53.009 Get LBA Status Capability: Not Supported 00:21:53.009 Command & Feature Lockdown Capability: Not Supported 00:21:53.009 Abort Command Limit: 4 00:21:53.009 Async Event Request Limit: 4 00:21:53.009 Number of Firmware Slots: N/A 00:21:53.009 Firmware Slot 1 Read-Only: N/A 00:21:53.009 Firmware Activation Without Reset: N/A 00:21:53.009 Multiple Update Detection Support: N/A 00:21:53.009 Firmware Update Granularity: No Information Provided 00:21:53.009 Per-Namespace SMART 
Log: No 00:21:53.009 Asymmetric Namespace Access Log Page: Not Supported 00:21:53.009 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:53.009 Command Effects Log Page: Supported 00:21:53.009 Get Log Page Extended Data: Supported 00:21:53.009 Telemetry Log Pages: Not Supported 00:21:53.009 Persistent Event Log Pages: Not Supported 00:21:53.009 Supported Log Pages Log Page: May Support 00:21:53.009 Commands Supported & Effects Log Page: Not Supported 00:21:53.009 Feature Identifiers & Effects Log Page:May Support 00:21:53.009 NVMe-MI Commands & Effects Log Page: May Support 00:21:53.009 Data Area 4 for Telemetry Log: Not Supported 00:21:53.009 Error Log Page Entries Supported: 128 00:21:53.009 Keep Alive: Supported 00:21:53.009 Keep Alive Granularity: 10000 ms 00:21:53.009 00:21:53.009 NVM Command Set Attributes 00:21:53.009 ========================== 00:21:53.009 Submission Queue Entry Size 00:21:53.009 Max: 64 00:21:53.009 Min: 64 00:21:53.009 Completion Queue Entry Size 00:21:53.009 Max: 16 00:21:53.009 Min: 16 00:21:53.009 Number of Namespaces: 32 00:21:53.009 Compare Command: Supported 00:21:53.009 Write Uncorrectable Command: Not Supported 00:21:53.009 Dataset Management Command: Supported 00:21:53.009 Write Zeroes Command: Supported 00:21:53.009 Set Features Save Field: Not Supported 00:21:53.009 Reservations: Supported 00:21:53.009 Timestamp: Not Supported 00:21:53.009 Copy: Supported 00:21:53.009 Volatile Write Cache: Present 00:21:53.009 Atomic Write Unit (Normal): 1 00:21:53.009 Atomic Write Unit (PFail): 1 00:21:53.009 Atomic Compare & Write Unit: 1 00:21:53.009 Fused Compare & Write: Supported 00:21:53.010 Scatter-Gather List 00:21:53.010 SGL Command Set: Supported 00:21:53.010 SGL Keyed: Supported 00:21:53.010 SGL Bit Bucket Descriptor: Not Supported 00:21:53.010 SGL Metadata Pointer: Not Supported 00:21:53.010 Oversized SGL: Not Supported 00:21:53.010 SGL Metadata Address: Not Supported 00:21:53.010 SGL Offset: Supported 00:21:53.010 Transport SGL Data Block: Not Supported 00:21:53.010 Replay Protected Memory Block: Not Supported 00:21:53.010 00:21:53.010 Firmware Slot Information 00:21:53.010 ========================= 00:21:53.010 Active slot: 1 00:21:53.010 Slot 1 Firmware Revision: 24.09 00:21:53.010 00:21:53.010 00:21:53.010 Commands Supported and Effects 00:21:53.010 ============================== 00:21:53.010 Admin Commands 00:21:53.010 -------------- 00:21:53.010 Get Log Page (02h): Supported 00:21:53.010 Identify (06h): Supported 00:21:53.010 Abort (08h): Supported 00:21:53.010 Set Features (09h): Supported 00:21:53.010 Get Features (0Ah): Supported 00:21:53.010 Asynchronous Event Request (0Ch): Supported 00:21:53.010 Keep Alive (18h): Supported 00:21:53.010 I/O Commands 00:21:53.010 ------------ 00:21:53.010 Flush (00h): Supported LBA-Change 00:21:53.010 Write (01h): Supported LBA-Change 00:21:53.010 Read (02h): Supported 00:21:53.010 Compare (05h): Supported 00:21:53.010 Write Zeroes (08h): Supported LBA-Change 00:21:53.010 Dataset Management (09h): Supported LBA-Change 00:21:53.010 Copy (19h): Supported LBA-Change 00:21:53.010 00:21:53.010 Error Log 00:21:53.010 ========= 00:21:53.010 00:21:53.010 Arbitration 00:21:53.010 =========== 00:21:53.010 Arbitration Burst: 1 00:21:53.010 00:21:53.010 Power Management 00:21:53.010 ================ 00:21:53.010 Number of Power States: 1 00:21:53.010 Current Power State: Power State #0 00:21:53.010 Power State #0: 00:21:53.010 Max Power: 0.00 W 00:21:53.010 Non-Operational State: Operational 00:21:53.010 Entry Latency: Not 
Reported 00:21:53.010 Exit Latency: Not Reported 00:21:53.010 Relative Read Throughput: 0 00:21:53.010 Relative Read Latency: 0 00:21:53.010 Relative Write Throughput: 0 00:21:53.010 Relative Write Latency: 0 00:21:53.010 Idle Power: Not Reported 00:21:53.010 Active Power: Not Reported 00:21:53.010 Non-Operational Permissive Mode: Not Supported 00:21:53.010 00:21:53.010 Health Information 00:21:53.010 ================== 00:21:53.010 Critical Warnings: 00:21:53.010 Available Spare Space: OK 00:21:53.010 Temperature: OK 00:21:53.010 Device Reliability: OK 00:21:53.010 Read Only: No 00:21:53.010 Volatile Memory Backup: OK 00:21:53.010 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:53.010 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:53.010 Available Spare: 0% 00:21:53.010 Available Spare Threshold: 0% 00:21:53.010 Life Percentage Used:[2024-07-26 14:03:20.375209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.010 [2024-07-26 14:03:20.375214] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b7dec0) 00:21:53.010 [2024-07-26 14:03:20.375220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.010 [2024-07-26 14:03:20.375233] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c018c0, cid 7, qid 0 00:21:53.010 [2024-07-26 14:03:20.375501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.010 [2024-07-26 14:03:20.375510] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.010 [2024-07-26 14:03:20.375513] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.010 [2024-07-26 14:03:20.375517] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c018c0) on tqpair=0x1b7dec0 00:21:53.010 [2024-07-26 14:03:20.375549] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:53.010 [2024-07-26 14:03:20.375561] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00e40) on tqpair=0x1b7dec0 00:21:53.010 [2024-07-26 14:03:20.375566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.010 [2024-07-26 14:03:20.375571] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00fc0) on tqpair=0x1b7dec0 00:21:53.010 [2024-07-26 14:03:20.375575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.010 [2024-07-26 14:03:20.375579] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01140) on tqpair=0x1b7dec0 00:21:53.010 [2024-07-26 14:03:20.375583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.010 [2024-07-26 14:03:20.375587] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c012c0) on tqpair=0x1b7dec0 00:21:53.010 [2024-07-26 14:03:20.375591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.010 [2024-07-26 14:03:20.375598] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.010 [2024-07-26 14:03:20.375601] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.010 [2024-07-26 14:03:20.375604] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7dec0) 00:21:53.010 [2024-07-26 14:03:20.375610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.010 [2024-07-26 14:03:20.375623] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c012c0, cid 3, qid 0 00:21:53.010 [2024-07-26 14:03:20.375787] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.010 [2024-07-26 14:03:20.375797] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.010 [2024-07-26 14:03:20.375800] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.010 [2024-07-26 14:03:20.375803] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c012c0) on tqpair=0x1b7dec0 00:21:53.010 [2024-07-26 14:03:20.375810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.010 [2024-07-26 14:03:20.375814] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.010 [2024-07-26 14:03:20.375817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7dec0) 00:21:53.010 [2024-07-26 14:03:20.375823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.010 [2024-07-26 14:03:20.375839] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c012c0, cid 3, qid 0 00:21:53.010 [2024-07-26 14:03:20.376012] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.010 [2024-07-26 14:03:20.376021] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.010 [2024-07-26 14:03:20.376024] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.010 [2024-07-26 14:03:20.376028] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c012c0) on tqpair=0x1b7dec0 00:21:53.010 [2024-07-26 14:03:20.376032] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:53.010 [2024-07-26 14:03:20.376036] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:53.010 [2024-07-26 14:03:20.376053] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.010 [2024-07-26 14:03:20.376057] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.010 [2024-07-26 14:03:20.376060] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7dec0) 00:21:53.010 [2024-07-26 14:03:20.376066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.010 [2024-07-26 14:03:20.376079] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c012c0, cid 3, qid 0 00:21:53.010 [2024-07-26 14:03:20.376239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.010 [2024-07-26 14:03:20.376252] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.010 [2024-07-26 14:03:20.376255] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.010 [2024-07-26 14:03:20.376258] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c012c0) on tqpair=0x1b7dec0 00:21:53.010 [2024-07-26 14:03:20.376269] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.010 [2024-07-26 14:03:20.376272] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.010 [2024-07-26 14:03:20.376275] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7dec0) 00:21:53.010 [2024-07-26 14:03:20.376282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.010 [2024-07-26 14:03:20.376294] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c012c0, cid 3, qid 0 00:21:53.010 [2024-07-26 14:03:20.376453] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.010 [2024-07-26 14:03:20.376462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.010 [2024-07-26 14:03:20.376465] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.010 [2024-07-26 14:03:20.376468] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c012c0) on tqpair=0x1b7dec0 00:21:53.010 [2024-07-26 14:03:20.376479] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.010 [2024-07-26 14:03:20.376483] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.010 [2024-07-26 14:03:20.376486] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7dec0) 00:21:53.010 [2024-07-26 14:03:20.376492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.010 [2024-07-26 14:03:20.376504] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c012c0, cid 3, qid 0 00:21:53.010 [2024-07-26 14:03:20.376663] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.010 [2024-07-26 14:03:20.376673] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.010 [2024-07-26 14:03:20.376676] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.010 [2024-07-26 14:03:20.376679] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c012c0) on tqpair=0x1b7dec0 00:21:53.010 [2024-07-26 14:03:20.376690] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.010 [2024-07-26 14:03:20.376694] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.010 [2024-07-26 14:03:20.376697] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7dec0) 00:21:53.011 [2024-07-26 14:03:20.376703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.011 [2024-07-26 14:03:20.376715] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c012c0, cid 3, qid 0 00:21:53.011 [2024-07-26 14:03:20.376876] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.011 [2024-07-26 14:03:20.376885] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.011 [2024-07-26 14:03:20.376888] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.376891] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c012c0) on tqpair=0x1b7dec0 00:21:53.011 [2024-07-26 14:03:20.376902] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.376906] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.376909] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7dec0) 00:21:53.011 
[2024-07-26 14:03:20.376915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.011 [2024-07-26 14:03:20.376928] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c012c0, cid 3, qid 0 00:21:53.011 [2024-07-26 14:03:20.377309] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.011 [2024-07-26 14:03:20.377315] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.011 [2024-07-26 14:03:20.377320] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.377324] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c012c0) on tqpair=0x1b7dec0 00:21:53.011 [2024-07-26 14:03:20.377333] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.377336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.377339] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7dec0) 00:21:53.011 [2024-07-26 14:03:20.377344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.011 [2024-07-26 14:03:20.377355] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c012c0, cid 3, qid 0 00:21:53.011 [2024-07-26 14:03:20.377519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.011 [2024-07-26 14:03:20.377528] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.011 [2024-07-26 14:03:20.377531] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.377535] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c012c0) on tqpair=0x1b7dec0 00:21:53.011 [2024-07-26 14:03:20.377545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.377549] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.377552] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7dec0) 00:21:53.011 [2024-07-26 14:03:20.377558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.011 [2024-07-26 14:03:20.377570] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c012c0, cid 3, qid 0 00:21:53.011 [2024-07-26 14:03:20.377729] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.011 [2024-07-26 14:03:20.377739] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.011 [2024-07-26 14:03:20.377742] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.377745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c012c0) on tqpair=0x1b7dec0 00:21:53.011 [2024-07-26 14:03:20.377756] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.377759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.377762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7dec0) 00:21:53.011 [2024-07-26 14:03:20.377769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.011 [2024-07-26 14:03:20.377780] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c012c0, cid 3, qid 0 00:21:53.011 [2024-07-26 14:03:20.377939] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.011 [2024-07-26 14:03:20.377949] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.011 [2024-07-26 14:03:20.377952] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.377955] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c012c0) on tqpair=0x1b7dec0 00:21:53.011 [2024-07-26 14:03:20.377966] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.377969] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.377972] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7dec0) 00:21:53.011 [2024-07-26 14:03:20.377979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.011 [2024-07-26 14:03:20.377991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c012c0, cid 3, qid 0 00:21:53.011 [2024-07-26 14:03:20.378158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.011 [2024-07-26 14:03:20.378168] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.011 [2024-07-26 14:03:20.378171] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.378177] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c012c0) on tqpair=0x1b7dec0 00:21:53.011 [2024-07-26 14:03:20.378188] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.378191] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.378194] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7dec0) 00:21:53.011 [2024-07-26 14:03:20.378201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.011 [2024-07-26 14:03:20.378213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c012c0, cid 3, qid 0 00:21:53.011 [2024-07-26 14:03:20.378372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.011 [2024-07-26 14:03:20.378381] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.011 [2024-07-26 14:03:20.378384] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.378388] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c012c0) on tqpair=0x1b7dec0 00:21:53.011 [2024-07-26 14:03:20.378398] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.378401] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.378404] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7dec0) 00:21:53.011 [2024-07-26 14:03:20.378411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.011 [2024-07-26 14:03:20.378423] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c012c0, cid 3, qid 0 00:21:53.011 [2024-07-26 14:03:20.378579] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.011 
[2024-07-26 14:03:20.378588] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.011 [2024-07-26 14:03:20.378591] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.378594] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c012c0) on tqpair=0x1b7dec0 00:21:53.011 [2024-07-26 14:03:20.378605] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.378608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.378611] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7dec0) 00:21:53.011 [2024-07-26 14:03:20.378618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.011 [2024-07-26 14:03:20.378629] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c012c0, cid 3, qid 0 00:21:53.011 [2024-07-26 14:03:20.378791] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.011 [2024-07-26 14:03:20.378800] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.011 [2024-07-26 14:03:20.378803] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.378807] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c012c0) on tqpair=0x1b7dec0 00:21:53.011 [2024-07-26 14:03:20.378818] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.378821] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.378824] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7dec0) 00:21:53.011 [2024-07-26 14:03:20.378830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.011 [2024-07-26 14:03:20.378842] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c012c0, cid 3, qid 0 00:21:53.011 [2024-07-26 14:03:20.379003] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.011 [2024-07-26 14:03:20.379012] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.011 [2024-07-26 14:03:20.379015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.379019] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c012c0) on tqpair=0x1b7dec0 00:21:53.011 [2024-07-26 14:03:20.379032] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.379036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.011 [2024-07-26 14:03:20.379039] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7dec0) 00:21:53.011 [2024-07-26 14:03:20.383049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.011 [2024-07-26 14:03:20.383064] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c012c0, cid 3, qid 0 00:21:53.011 [2024-07-26 14:03:20.383319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.011 [2024-07-26 14:03:20.383329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.011 [2024-07-26 14:03:20.383332] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:21:53.011 [2024-07-26 14:03:20.383336] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c012c0) on tqpair=0x1b7dec0 00:21:53.011 [2024-07-26 14:03:20.383344] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:21:53.011 0% 00:21:53.011 Data Units Read: 0 00:21:53.011 Data Units Written: 0 00:21:53.011 Host Read Commands: 0 00:21:53.011 Host Write Commands: 0 00:21:53.011 Controller Busy Time: 0 minutes 00:21:53.011 Power Cycles: 0 00:21:53.011 Power On Hours: 0 hours 00:21:53.011 Unsafe Shutdowns: 0 00:21:53.011 Unrecoverable Media Errors: 0 00:21:53.011 Lifetime Error Log Entries: 0 00:21:53.011 Warning Temperature Time: 0 minutes 00:21:53.011 Critical Temperature Time: 0 minutes 00:21:53.011 00:21:53.011 Number of Queues 00:21:53.011 ================ 00:21:53.011 Number of I/O Submission Queues: 127 00:21:53.011 Number of I/O Completion Queues: 127 00:21:53.011 00:21:53.011 Active Namespaces 00:21:53.012 ================= 00:21:53.012 Namespace ID:1 00:21:53.012 Error Recovery Timeout: Unlimited 00:21:53.012 Command Set Identifier: NVM (00h) 00:21:53.012 Deallocate: Supported 00:21:53.012 Deallocated/Unwritten Error: Not Supported 00:21:53.012 Deallocated Read Value: Unknown 00:21:53.012 Deallocate in Write Zeroes: Not Supported 00:21:53.012 Deallocated Guard Field: 0xFFFF 00:21:53.012 Flush: Supported 00:21:53.012 Reservation: Supported 00:21:53.012 Namespace Sharing Capabilities: Multiple Controllers 00:21:53.012 Size (in LBAs): 131072 (0GiB) 00:21:53.012 Capacity (in LBAs): 131072 (0GiB) 00:21:53.012 Utilization (in LBAs): 131072 (0GiB) 00:21:53.012 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:53.012 EUI64: ABCDEF0123456789 00:21:53.012 UUID: 0e8f332e-e6d9-41cb-b34f-dcf5f2adb34b 00:21:53.012 Thin Provisioning: Not Supported 00:21:53.012 Per-NS Atomic Units: Yes 00:21:53.012 Atomic Boundary Size (Normal): 0 00:21:53.012 Atomic Boundary Size (PFail): 0 00:21:53.012 Atomic Boundary Offset: 0 00:21:53.012 Maximum Single Source Range Length: 65535 00:21:53.012 Maximum Copy Length: 65535 00:21:53.012 Maximum Source Range Count: 1 00:21:53.012 NGUID/EUI64 Never Reused: No 00:21:53.012 Namespace Write Protected: No 00:21:53.012 Number of LBA Formats: 1 00:21:53.012 Current LBA Format: LBA Format #00 00:21:53.012 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:53.012 00:21:53.012 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:53.012 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:53.012 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.012 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:53.012 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.012 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:53.012 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:53.012 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:53.012 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:21:53.012 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:53.012 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:21:53.012 14:03:20 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:53.012 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:53.012 rmmod nvme_tcp 00:21:53.272 rmmod nvme_fabrics 00:21:53.272 rmmod nvme_keyring 00:21:53.272 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:53.272 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:21:53.272 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:21:53.272 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3038962 ']' 00:21:53.272 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3038962 00:21:53.272 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 3038962 ']' 00:21:53.272 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 3038962 00:21:53.272 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:21:53.272 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:53.272 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3038962 00:21:53.272 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:53.272 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:53.272 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3038962' 00:21:53.272 killing process with pid 3038962 00:21:53.272 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 3038962 00:21:53.272 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 3038962 00:21:53.532 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:53.532 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:53.532 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:53.532 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:53.532 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:53.532 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.532 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.532 14:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.443 14:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:55.443 00:21:55.443 real 0m8.836s 00:21:55.443 user 0m7.395s 00:21:55.443 sys 0m4.153s 00:21:55.443 14:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:55.443 14:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:55.443 ************************************ 00:21:55.443 END TEST nvmf_identify 00:21:55.443 ************************************ 00:21:55.443 14:03:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh 
--transport=tcp 00:21:55.443 14:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:55.443 14:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:55.443 14:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.443 ************************************ 00:21:55.443 START TEST nvmf_perf 00:21:55.443 ************************************ 00:21:55.443 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:55.704 * Looking for test storage... 00:21:55.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:21:55.704 14:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:00.988 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.988 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:00.989 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:00.989 Found net devices under 0000:86:00.0: cvl_0_0 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:00.989 Found net devices under 0000:86:00.1: cvl_0_1 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:00.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:22:00.989 00:22:00.989 --- 10.0.0.2 ping statistics --- 00:22:00.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.989 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:00.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:00.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.435 ms 00:22:00.989 00:22:00.989 --- 10.0.0.1 ping statistics --- 00:22:00.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.989 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3042687 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@482 -- # waitforlisten 3042687 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 3042687 ']' 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:00.989 14:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:01.250 [2024-07-26 14:03:28.457406] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:22:01.250 [2024-07-26 14:03:28.457453] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.250 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.250 [2024-07-26 14:03:28.514189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:01.250 [2024-07-26 14:03:28.595179] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.250 [2024-07-26 14:03:28.595216] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.250 [2024-07-26 14:03:28.595223] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.250 [2024-07-26 14:03:28.595230] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.250 [2024-07-26 14:03:28.595235] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
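At this point the perf harness has started nvmf_tgt (pid 3042687) inside the cvl_0_0_ns_spdk namespace with core mask 0xF; the xtrace that follows shows the target being configured through rpc.py and then exercised with spdk_nvme_perf. A minimal sketch of that sequence, condensed from the commands visible in the trace, assuming the workspace path the log reports and using a simple polling loop in place of the harness's waitforlisten helper:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start the target inside the target namespace (same flags nvmfappstart uses above).
ip netns exec cvl_0_0_ns_spdk "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
tgt_pid=$!

# Wait until the RPC socket answers (stand-in for waitforlisten; spdk_get_version is
# assumed here as a harmless probe RPC).
until "$SPDK"/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done

# Configure the TCP transport, a subsystem, two namespaces and a listener; these are
# the RPCs perf.sh issues next in the trace. Nvme0n1 exists because perf.sh first loads
# a generated bdev config (gen_nvme.sh | rpc.py load_subsystem_config).
"$SPDK"/scripts/rpc.py nvmf_create_transport -t tcp -o
"$SPDK"/scripts/rpc.py bdev_malloc_create 64 512
"$SPDK"/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
"$SPDK"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Drive it from the initiator side, e.g. the first fabrics run below:
"$SPDK"/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

# Tear down when done.
"$SPDK"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill $tgt_pid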
00:22:01.250 [2024-07-26 14:03:28.595269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.250 [2024-07-26 14:03:28.595365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.250 [2024-07-26 14:03:28.595442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:01.250 [2024-07-26 14:03:28.595443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.190 14:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:02.190 14:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:22:02.190 14:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:02.190 14:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:02.190 14:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:02.190 14:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.190 14:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:02.190 14:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:05.486 14:03:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:05.486 14:03:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:05.486 14:03:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:05.486 14:03:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:05.486 14:03:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:05.486 14:03:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:05.486 14:03:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:05.486 14:03:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:05.486 14:03:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:05.486 [2024-07-26 14:03:32.871821] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.486 14:03:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:05.746 14:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:05.746 14:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:06.006 14:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:06.006 14:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:06.265 14:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:06.265 [2024-07-26 14:03:33.607348] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.265 14:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:06.526 14:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:06.526 14:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:06.526 14:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:06.526 14:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:07.909 Initializing NVMe Controllers 00:22:07.909 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:07.909 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:07.909 Initialization complete. Launching workers. 00:22:07.909 ======================================================== 00:22:07.909 Latency(us) 00:22:07.909 Device Information : IOPS MiB/s Average min max 00:22:07.909 PCIE (0000:5e:00.0) NSID 1 from core 0: 97977.61 382.73 326.12 10.67 4389.80 00:22:07.909 ======================================================== 00:22:07.909 Total : 97977.61 382.73 326.12 10.67 4389.80 00:22:07.909 00:22:07.909 14:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:07.909 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.292 Initializing NVMe Controllers 00:22:09.292 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:09.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:09.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:09.292 Initialization complete. Launching workers. 
00:22:09.292 ======================================================== 00:22:09.292 Latency(us) 00:22:09.292 Device Information : IOPS MiB/s Average min max 00:22:09.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 56.00 0.22 18410.27 242.60 45390.62 00:22:09.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17955.33 7956.49 47901.88 00:22:09.292 ======================================================== 00:22:09.292 Total : 112.00 0.44 18182.80 242.60 47901.88 00:22:09.292 00:22:09.292 14:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:09.292 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.676 Initializing NVMe Controllers 00:22:10.676 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:10.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:10.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:10.676 Initialization complete. Launching workers. 00:22:10.676 ======================================================== 00:22:10.676 Latency(us) 00:22:10.676 Device Information : IOPS MiB/s Average min max 00:22:10.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7186.00 28.07 4461.57 739.34 11152.05 00:22:10.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3829.00 14.96 8402.21 5980.91 16007.74 00:22:10.676 ======================================================== 00:22:10.676 Total : 11015.00 43.03 5831.40 739.34 16007.74 00:22:10.676 00:22:10.676 14:03:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:10.676 14:03:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:10.676 14:03:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:10.676 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.265 Initializing NVMe Controllers 00:22:13.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:13.265 Controller IO queue size 128, less than required. 00:22:13.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:13.265 Controller IO queue size 128, less than required. 00:22:13.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:13.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:13.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:13.265 Initialization complete. Launching workers. 
00:22:13.265 ======================================================== 00:22:13.265 Latency(us) 00:22:13.265 Device Information : IOPS MiB/s Average min max 00:22:13.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 771.91 192.98 170646.48 110238.51 244654.32 00:22:13.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 570.44 142.61 239742.49 101971.34 366033.85 00:22:13.265 ======================================================== 00:22:13.265 Total : 1342.35 335.59 200009.07 101971.34 366033.85 00:22:13.265 00:22:13.265 14:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:13.265 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.265 No valid NVMe controllers or AIO or URING devices found 00:22:13.265 Initializing NVMe Controllers 00:22:13.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:13.265 Controller IO queue size 128, less than required. 00:22:13.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:13.265 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:13.265 Controller IO queue size 128, less than required. 00:22:13.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:13.265 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:13.265 WARNING: Some requested NVMe devices were skipped 00:22:13.525 14:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:13.525 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.099 Initializing NVMe Controllers 00:22:16.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:16.099 Controller IO queue size 128, less than required. 00:22:16.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:16.099 Controller IO queue size 128, less than required. 00:22:16.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:16.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:16.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:16.099 Initialization complete. Launching workers. 
00:22:16.099 00:22:16.099 ==================== 00:22:16.099 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:16.099 TCP transport: 00:22:16.099 polls: 64623 00:22:16.099 idle_polls: 20457 00:22:16.099 sock_completions: 44166 00:22:16.099 nvme_completions: 3003 00:22:16.099 submitted_requests: 4502 00:22:16.099 queued_requests: 1 00:22:16.099 00:22:16.099 ==================== 00:22:16.099 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:16.099 TCP transport: 00:22:16.099 polls: 71585 00:22:16.099 idle_polls: 20989 00:22:16.099 sock_completions: 50596 00:22:16.099 nvme_completions: 3107 00:22:16.099 submitted_requests: 4642 00:22:16.099 queued_requests: 1 00:22:16.099 ======================================================== 00:22:16.099 Latency(us) 00:22:16.099 Device Information : IOPS MiB/s Average min max 00:22:16.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 750.50 187.62 177269.50 90413.26 301331.78 00:22:16.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 776.50 194.12 174349.58 100313.68 257655.53 00:22:16.099 ======================================================== 00:22:16.099 Total : 1527.00 381.75 175784.68 90413.26 301331.78 00:22:16.099 00:22:16.099 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:16.099 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:16.099 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:16.099 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:16.099 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:16.099 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:16.099 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:16.099 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:16.099 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:16.099 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:16.099 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:16.099 rmmod nvme_tcp 00:22:16.099 rmmod nvme_fabrics 00:22:16.099 rmmod nvme_keyring 00:22:16.099 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:16.099 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:16.099 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:16.099 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3042687 ']' 00:22:16.099 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3042687 00:22:16.099 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 3042687 ']' 00:22:16.099 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 3042687 00:22:16.099 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:22:16.099 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:16.099 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3042687 00:22:16.357 14:03:43 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:16.357 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:16.357 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3042687' 00:22:16.357 killing process with pid 3042687 00:22:16.358 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 3042687 00:22:16.358 14:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 3042687 00:22:17.735 14:03:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:17.735 14:03:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:17.735 14:03:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:17.735 14:03:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:17.735 14:03:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:17.735 14:03:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.735 14:03:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.735 14:03:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:20.272 00:22:20.272 real 0m24.251s 00:22:20.272 user 1m6.209s 00:22:20.272 sys 0m6.722s 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:20.272 ************************************ 00:22:20.272 END TEST nvmf_perf 00:22:20.272 ************************************ 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.272 ************************************ 00:22:20.272 START TEST nvmf_fio_host 00:22:20.272 ************************************ 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:20.272 * Looking for test storage... 
00:22:20.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.272 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:20.273 14:03:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:25.556 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:25.556 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:25.557 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:25.557 
14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:25.557 Found net devices under 0000:86:00.0: cvl_0_0 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:25.557 Found net devices under 0000:86:00.1: cvl_0_1 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:25.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:25.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:22:25.557 00:22:25.557 --- 10.0.0.2 ping statistics --- 00:22:25.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.557 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:25.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:25.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:22:25.557 00:22:25.557 --- 10.0.0.1 ping statistics --- 00:22:25.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.557 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3048819 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # 
waitforlisten 3048819 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 3048819 ']' 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:25.557 14:03:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.557 [2024-07-26 14:03:52.845429] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:22:25.557 [2024-07-26 14:03:52.845484] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.557 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.557 [2024-07-26 14:03:52.903184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:25.557 [2024-07-26 14:03:52.983230] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.557 [2024-07-26 14:03:52.983269] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.557 [2024-07-26 14:03:52.983276] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.557 [2024-07-26 14:03:52.983282] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.557 [2024-07-26 14:03:52.983287] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:25.557 [2024-07-26 14:03:52.983331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.557 [2024-07-26 14:03:52.983429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.557 [2024-07-26 14:03:52.983516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.557 [2024-07-26 14:03:52.983518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:26.496 14:03:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:26.496 14:03:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:22:26.496 14:03:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:26.496 [2024-07-26 14:03:53.847934] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.496 14:03:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:26.496 14:03:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:26.496 14:03:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.496 14:03:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:26.755 Malloc1 00:22:26.755 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:27.014 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:27.275 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:27.275 [2024-07-26 14:03:54.625825] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.275 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:27.535 14:03:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:27.535 14:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:27.795 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:27.795 fio-3.35 00:22:27.795 Starting 1 thread 00:22:27.795 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.335 00:22:30.335 test: (groupid=0, jobs=1): err= 0: pid=3049416: Fri Jul 26 14:03:57 2024 00:22:30.335 read: IOPS=11.2k, BW=43.7MiB/s (45.8MB/s)(87.5MiB/2003msec) 00:22:30.335 slat (nsec): min=1594, max=243079, avg=1725.18, stdev=2256.14 00:22:30.335 clat (usec): min=3037, max=22783, avg=6814.23, stdev=1698.15 00:22:30.335 lat (usec): min=3039, max=22794, avg=6815.96, stdev=1698.58 00:22:30.335 clat percentiles (usec): 00:22:30.335 | 1.00th=[ 4359], 5.00th=[ 4948], 10.00th=[ 5342], 20.00th=[ 5669], 00:22:30.335 | 30.00th=[ 5932], 40.00th=[ 6194], 50.00th=[ 6390], 60.00th=[ 6718], 00:22:30.335 | 70.00th=[ 7177], 80.00th=[ 7767], 90.00th=[ 8717], 95.00th=[ 9765], 00:22:30.335 | 99.00th=[12387], 99.50th=[13173], 99.90th=[22414], 99.95th=[22676], 00:22:30.335 | 99.99th=[22676] 00:22:30.335 bw ( KiB/s): min=42088, max=46312, per=99.76%, avg=44620.00, stdev=1805.05, samples=4 00:22:30.335 iops : min=10522, max=11578, avg=11155.00, stdev=451.26, samples=4 00:22:30.335 write: IOPS=11.1k, BW=43.5MiB/s 
(45.6MB/s)(87.2MiB/2003msec); 0 zone resets 00:22:30.335 slat (nsec): min=1618, max=236625, avg=1806.39, stdev=1747.47 00:22:30.335 clat (usec): min=1891, max=21303, avg=4599.00, stdev=1020.26 00:22:30.335 lat (usec): min=1892, max=21309, avg=4600.81, stdev=1020.66 00:22:30.335 clat percentiles (usec): 00:22:30.335 | 1.00th=[ 2704], 5.00th=[ 3228], 10.00th=[ 3490], 20.00th=[ 3851], 00:22:30.335 | 30.00th=[ 4146], 40.00th=[ 4424], 50.00th=[ 4621], 60.00th=[ 4752], 00:22:30.335 | 70.00th=[ 4948], 80.00th=[ 5145], 90.00th=[ 5473], 95.00th=[ 5997], 00:22:30.335 | 99.00th=[ 7635], 99.50th=[ 8848], 99.90th=[15401], 99.95th=[16909], 00:22:30.335 | 99.99th=[17957] 00:22:30.335 bw ( KiB/s): min=42600, max=45456, per=99.97%, avg=44556.00, stdev=1337.17, samples=4 00:22:30.335 iops : min=10650, max=11364, avg=11139.00, stdev=334.29, samples=4 00:22:30.335 lat (msec) : 2=0.01%, 4=12.47%, 10=85.24%, 20=2.15%, 50=0.14% 00:22:30.335 cpu : usr=71.98%, sys=22.48%, ctx=20, majf=0, minf=5 00:22:30.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:30.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:30.335 issued rwts: total=22398,22317,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:30.335 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:30.335 00:22:30.335 Run status group 0 (all jobs): 00:22:30.335 READ: bw=43.7MiB/s (45.8MB/s), 43.7MiB/s-43.7MiB/s (45.8MB/s-45.8MB/s), io=87.5MiB (91.7MB), run=2003-2003msec 00:22:30.335 WRITE: bw=43.5MiB/s (45.6MB/s), 43.5MiB/s-43.5MiB/s (45.6MB/s-45.6MB/s), io=87.2MiB (91.4MB), run=2003-2003msec 00:22:30.335 14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:30.335 14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:30.335 14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:30.335 14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:30.335 14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:30.335 14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:30.335 14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:30.335 14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:30.335 14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:30.335 14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:30.335 14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:30.335 14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:30.335 
14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:30.335 14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:30.335 14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:30.335 14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:30.335 14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:30.335 14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:30.335 14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:30.335 14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:30.335 14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:30.335 14:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:30.595 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:30.595 fio-3.35 00:22:30.595 Starting 1 thread 00:22:30.595 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.135 00:22:33.135 test: (groupid=0, jobs=1): err= 0: pid=3049943: Fri Jul 26 14:04:00 2024 00:22:33.135 read: IOPS=8575, BW=134MiB/s (141MB/s)(269MiB/2007msec) 00:22:33.135 slat (usec): min=2, max=105, avg= 2.85, stdev= 1.49 00:22:33.135 clat (usec): min=2381, max=52629, avg=9167.37, stdev=3847.96 00:22:33.135 lat (usec): min=2384, max=52632, avg=9170.22, stdev=3848.40 00:22:33.135 clat percentiles (usec): 00:22:33.135 | 1.00th=[ 4293], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 6915], 00:22:33.135 | 30.00th=[ 7504], 40.00th=[ 7963], 50.00th=[ 8455], 60.00th=[ 8979], 00:22:33.135 | 70.00th=[ 9634], 80.00th=[10552], 90.00th=[11600], 95.00th=[12911], 00:22:33.135 | 99.00th=[27132], 99.50th=[31589], 99.90th=[32900], 99.95th=[32900], 00:22:33.135 | 99.99th=[45351] 00:22:33.135 bw ( KiB/s): min=63264, max=78368, per=51.37%, avg=70490.50, stdev=7148.36, samples=4 00:22:33.135 iops : min= 3954, max= 4898, avg=4405.50, stdev=446.65, samples=4 00:22:33.135 write: IOPS=5105, BW=79.8MiB/s (83.7MB/s)(143MiB/1797msec); 0 zone resets 00:22:33.135 slat (usec): min=30, max=380, avg=31.95, stdev= 7.69 00:22:33.135 clat (usec): min=4156, max=40565, avg=9856.39, stdev=3955.69 00:22:33.135 lat (usec): min=4188, max=40600, avg=9888.34, stdev=3959.32 00:22:33.135 clat percentiles (usec): 00:22:33.135 | 1.00th=[ 6587], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 8094], 00:22:33.135 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9503], 00:22:33.135 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11469], 95.00th=[12518], 00:22:33.135 | 99.00th=[32900], 99.50th=[33162], 99.90th=[36963], 99.95th=[37487], 00:22:33.135 | 99.99th=[40633] 00:22:33.135 bw ( KiB/s): min=66208, max=80896, per=89.80%, avg=73360.75, stdev=7189.12, samples=4 00:22:33.135 iops : min= 4138, max= 5056, avg=4585.00, stdev=449.28, samples=4 00:22:33.135 lat (msec) : 4=0.39%, 10=73.38%, 20=23.01%, 50=3.22%, 100=0.01% 00:22:33.135 cpu : 
usr=86.44%, sys=10.77%, ctx=16, majf=0, minf=2 00:22:33.135 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:22:33.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.135 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:33.135 issued rwts: total=17211,9175,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.135 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:33.135 00:22:33.135 Run status group 0 (all jobs): 00:22:33.135 READ: bw=134MiB/s (141MB/s), 134MiB/s-134MiB/s (141MB/s-141MB/s), io=269MiB (282MB), run=2007-2007msec 00:22:33.135 WRITE: bw=79.8MiB/s (83.7MB/s), 79.8MiB/s-79.8MiB/s (83.7MB/s-83.7MB/s), io=143MiB (150MB), run=1797-1797msec 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:33.135 rmmod nvme_tcp 00:22:33.135 rmmod nvme_fabrics 00:22:33.135 rmmod nvme_keyring 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3048819 ']' 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3048819 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 3048819 ']' 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 3048819 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3048819 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3048819' 00:22:33.135 killing process with pid 3048819 00:22:33.135 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@969 -- # kill 3048819 00:22:33.136 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 3048819 00:22:33.396 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:33.396 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:33.396 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:33.396 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:33.396 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:33.396 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.396 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.396 14:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.938 14:04:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:35.938 00:22:35.938 real 0m15.660s 00:22:35.938 user 0m47.494s 00:22:35.938 sys 0m5.976s 00:22:35.938 14:04:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:35.938 14:04:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.938 ************************************ 00:22:35.938 END TEST nvmf_fio_host 00:22:35.938 ************************************ 00:22:35.938 14:04:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:35.938 14:04:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:35.938 14:04:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:35.938 14:04:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.938 ************************************ 00:22:35.938 START TEST nvmf_failover 00:22:35.938 ************************************ 00:22:35.938 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:35.938 * Looking for test storage... 
00:22:35.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:35.938 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.938 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:35.938 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.938 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.938 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.938 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.938 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.938 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.938 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.938 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.938 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.938 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.938 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:35.939 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:35.939 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.939 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.939 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.939 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.939 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.939 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.939 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.939 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.939 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.939 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.939 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.939 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:35.939 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.939 14:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:35.939 14:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.222 14:04:08 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:41.222 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:41.222 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:41.222 Found net devices under 0000:86:00.0: cvl_0_0 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:41.222 Found net devices under 0000:86:00.1: cvl_0_1 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:41.222 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:41.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:41.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:22:41.223 00:22:41.223 --- 10.0.0.2 ping statistics --- 00:22:41.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.223 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:41.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:41.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:22:41.223 00:22:41.223 --- 10.0.0.1 ping statistics --- 00:22:41.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.223 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3053739 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3053739 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3053739 ']' 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:41.223 14:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:41.223 [2024-07-26 14:04:08.508197] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:22:41.223 [2024-07-26 14:04:08.508240] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.223 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.223 [2024-07-26 14:04:08.565813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:41.223 [2024-07-26 14:04:08.645830] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.223 [2024-07-26 14:04:08.645868] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.223 [2024-07-26 14:04:08.645875] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.223 [2024-07-26 14:04:08.645881] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.223 [2024-07-26 14:04:08.645887] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:41.223 [2024-07-26 14:04:08.645985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:41.223 [2024-07-26 14:04:08.646092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:41.223 [2024-07-26 14:04:08.646094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.163 14:04:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:42.163 14:04:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:22:42.163 14:04:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:42.163 14:04:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:42.163 14:04:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:42.163 14:04:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.163 14:04:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:42.163 [2024-07-26 14:04:09.511166] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.163 14:04:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:42.502 Malloc0 00:22:42.502 14:04:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:42.762 14:04:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:42.762 14:04:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:43.022 [2024-07-26 14:04:10.259259] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.022 14:04:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:43.022 [2024-07-26 14:04:10.443738] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:43.281 14:04:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:43.281 [2024-07-26 14:04:10.632366] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:43.281 14:04:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3054134 00:22:43.281 14:04:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:43.281 14:04:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:43.281 14:04:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3054134 /var/tmp/bdevperf.sock 00:22:43.281 14:04:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3054134 ']' 00:22:43.281 14:04:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:43.281 14:04:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:43.281 14:04:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:43.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:43.281 14:04:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:43.281 14:04:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:44.218 14:04:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:44.218 14:04:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:22:44.218 14:04:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:44.478 NVMe0n1 00:22:44.738 14:04:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:44.998 00:22:44.998 14:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3054461 00:22:44.998 14:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:44.998 14:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:45.949 14:04:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:46.216 [2024-07-26 14:04:13.486616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231df50 is same with the state(5) to be set 00:22:46.216 [2024-07-26 14:04:13.486667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231df50 is same with the state(5) to be set 00:22:46.216 [2024-07-26 14:04:13.486675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231df50 is same with the state(5) to be set 00:22:46.216 [2024-07-26 14:04:13.486681] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231df50 is same with the state(5) to be set 00:22:46.216 [2024-07-26 14:04:13.486688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231df50 is same with the state(5) to be set 00:22:46.216 [2024-07-26 14:04:13.486694] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231df50 is same with the state(5) to be set 00:22:46.216 [2024-07-26 14:04:13.486700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x231df50 is same with the state(5) to be set 00:22:46.216 [2024-07-26 14:04:13.486831]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231df50 is same with the state(5) to be set 00:22:46.217 [2024-07-26 14:04:13.486837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231df50 is same with the state(5) to be set 00:22:46.217 14:04:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:49.507 14:04:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:49.507 00:22:49.507 14:04:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:49.766 [2024-07-26 14:04:16.948280] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ed70 is same with the state(5) to be set 00:22:49.766 [2024-07-26 14:04:16.948322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ed70 is same with the state(5) to be set 00:22:49.766 [2024-07-26 14:04:16.948329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ed70 is same with the state(5) to be set 00:22:49.766 [2024-07-26 14:04:16.948336] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ed70 is same with the state(5) to be set 00:22:49.766 [2024-07-26 14:04:16.948342] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ed70 is same with the state(5) to be set 00:22:49.766 [2024-07-26 14:04:16.948348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ed70 is same with the state(5) to be set 00:22:49.766 [2024-07-26 14:04:16.948354] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ed70 is same with the state(5) to be set 00:22:49.766 [2024-07-26 14:04:16.948360] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ed70 is same with the state(5) to be set 00:22:49.766 [2024-07-26 14:04:16.948366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ed70 is same with the state(5) to be set 00:22:49.766 [2024-07-26 14:04:16.948372] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ed70 is same with the state(5) to be set 00:22:49.766 [2024-07-26 14:04:16.948378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ed70 is same with the state(5) to be set 00:22:49.766 [2024-07-26 14:04:16.948383] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ed70 is same with the state(5) to be set 00:22:49.767 [2024-07-26 14:04:16.948389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ed70 is same with the state(5) to be set 00:22:49.767 [2024-07-26 14:04:16.948395] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ed70 is same with the state(5) to be set 00:22:49.767 [2024-07-26 14:04:16.948401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ed70 is same with the state(5) to be set 00:22:49.767 [2024-07-26 14:04:16.948406] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ed70 is same with the state(5) to be set 00:22:49.767 [2024-07-26 
14:04:16.948412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ed70 is same with the state(5) to be set 00:22:49.768 [2024-07-26
14:04:16.949068] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ed70 is same with the state(5) to be set 00:22:49.768 [2024-07-26 14:04:16.949074] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ed70 is same with the state(5) to be set 00:22:49.768 [2024-07-26 14:04:16.949080] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ed70 is same with the state(5) to be set 00:22:49.768 [2024-07-26 14:04:16.949085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ed70 is same with the state(5) to be set 00:22:49.768 14:04:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:53.059 14:04:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:53.059 [2024-07-26 14:04:20.150871] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.059 14:04:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:54.009 14:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:54.009 [2024-07-26 14:04:21.353683] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d8b40 is same with the state(5) to be set 00:22:54.009 [2024-07-26 14:04:21.353716] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d8b40 is same with the state(5) to be set 00:22:54.009 [2024-07-26 14:04:21.353723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d8b40 is same with the state(5) to be set 00:22:54.009 [2024-07-26 14:04:21.353735] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d8b40 is same with the state(5) to be set 00:22:54.009 [2024-07-26 14:04:21.353741] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d8b40 is same with the state(5) to be set 00:22:54.009 [2024-07-26 14:04:21.353747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d8b40 is same with the state(5) to be set 00:22:54.009 [2024-07-26 14:04:21.353753] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d8b40 is same with the state(5) to be set 00:22:54.009 [2024-07-26 14:04:21.353759] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d8b40 is same with the state(5) to be set 00:22:54.009 [2024-07-26 14:04:21.353765] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d8b40 is same with the state(5) to be set 00:22:54.009 [2024-07-26 14:04:21.353770] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d8b40 is same with the state(5) to be set 00:22:54.009 [2024-07-26 14:04:21.353776] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d8b40 is same with the state(5) to be set 00:22:54.009 [2024-07-26 14:04:21.353782] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d8b40 is same with the state(5) to be set 00:22:54.009 [2024-07-26 14:04:21.353787] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d8b40 is same with the state(5) to be set 
00:22:54.009 [2024-07-26 14:04:21.353793] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d8b40 is same with the state(5) to be set 00:22:54.009 [2024-07-26 14:04:21.353799] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d8b40 is same with the state(5) to be set 00:22:54.009 14:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3054461 00:23:00.591 0 00:23:00.591 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3054134 00:23:00.592 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3054134 ']' 00:23:00.592 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3054134 00:23:00.592 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:23:00.592 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:00.592 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3054134 00:23:00.592 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:00.592 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:00.592 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3054134' 00:23:00.592 killing process with pid 3054134 00:23:00.592 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3054134 00:23:00.592 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3054134 00:23:00.592 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:00.592 [2024-07-26 14:04:10.706725] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:23:00.592 [2024-07-26 14:04:10.706780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3054134 ] 00:23:00.592 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.592 [2024-07-26 14:04:10.761407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.592 [2024-07-26 14:04:10.835716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.592 Running I/O for 15 seconds... 
00:23:00.592 [2024-07-26 14:04:13.488728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.488763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.488780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.488788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.488798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.488805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.488814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.488821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.488829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.488836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.488844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.488850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.488858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.488864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.488872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.488879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.488887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.488893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.488901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:88824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.488907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.488915] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.488922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.488934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.488941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.488949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.488956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.488964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.488970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.488978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.488985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.488993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.592 [2024-07-26 14:04:13.488999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.489008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:88560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.592 [2024-07-26 14:04:13.489014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.489023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:88568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.592 [2024-07-26 14:04:13.489029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.489037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.592 [2024-07-26 14:04:13.489049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.489057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:88584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.592 [2024-07-26 14:04:13.489063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.489071] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:65 nsid:1 lba:88592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.592 [2024-07-26 14:04:13.489078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.489086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.592 [2024-07-26 14:04:13.489092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.489100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:88608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.592 [2024-07-26 14:04:13.489107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.489115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:88616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.592 [2024-07-26 14:04:13.489124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.489132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:88624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.592 [2024-07-26 14:04:13.489138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.489146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.489152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.489160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.489167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.489175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.489183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.489191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.489197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.489205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.489212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.489220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88912 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.489226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.489234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.592 [2024-07-26 14:04:13.489240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.592 [2024-07-26 14:04:13.489247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:00.593 [2024-07-26 14:04:13.489367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489510] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489650] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.593 [2024-07-26 14:04:13.489791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.593 [2024-07-26 14:04:13.489798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.594 [2024-07-26 14:04:13.489805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.489813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.594 [2024-07-26 14:04:13.489819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.489827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.594 [2024-07-26 14:04:13.489834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.489844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.594 [2024-07-26 14:04:13.489850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.489858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.594 [2024-07-26 14:04:13.489864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.489872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.594 [2024-07-26 14:04:13.489878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.489886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.594 [2024-07-26 14:04:13.489892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.489900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.594 [2024-07-26 14:04:13.489906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.489914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.594 [2024-07-26 14:04:13.489920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.489928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.594 [2024-07-26 14:04:13.489935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 
[2024-07-26 14:04:13.489942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:89320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.594 [2024-07-26 14:04:13.489948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.489956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.594 [2024-07-26 14:04:13.489962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.489970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.594 [2024-07-26 14:04:13.489976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.489984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.594 [2024-07-26 14:04:13.489990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.489998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.594 [2024-07-26 14:04:13.490004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.490012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.594 [2024-07-26 14:04:13.490020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.490027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.594 [2024-07-26 14:04:13.490038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.490050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:89376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.594 [2024-07-26 14:04:13.490057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.490065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.594 [2024-07-26 14:04:13.490072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.490091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.594 [2024-07-26 14:04:13.490098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88632 len:8 PRP1 0x0 PRP2 0x0 00:23:00.594 [2024-07-26 14:04:13.490104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.490113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.594 [2024-07-26 14:04:13.490119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.594 [2024-07-26 14:04:13.490124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88640 len:8 PRP1 0x0 PRP2 0x0 00:23:00.594 [2024-07-26 14:04:13.490130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.490137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.594 [2024-07-26 14:04:13.490142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.594 [2024-07-26 14:04:13.490148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88648 len:8 PRP1 0x0 PRP2 0x0 00:23:00.594 [2024-07-26 14:04:13.490153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.490160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.594 [2024-07-26 14:04:13.490164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.594 [2024-07-26 14:04:13.490170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88656 len:8 PRP1 0x0 PRP2 0x0 00:23:00.594 [2024-07-26 14:04:13.490176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.490182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.594 [2024-07-26 14:04:13.490187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.594 [2024-07-26 14:04:13.490193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88664 len:8 PRP1 0x0 PRP2 0x0 00:23:00.594 [2024-07-26 14:04:13.490199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.490205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.594 [2024-07-26 14:04:13.490210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.594 [2024-07-26 14:04:13.490215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88672 len:8 PRP1 0x0 PRP2 0x0 00:23:00.594 [2024-07-26 14:04:13.490221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.490229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.594 [2024-07-26 14:04:13.490234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.594 [2024-07-26 14:04:13.490239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88680 len:8 PRP1 0x0 PRP2 0x0 00:23:00.594 [2024-07-26 14:04:13.490245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.594 [2024-07-26 14:04:13.490253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.594 [2024-07-26 14:04:13.490258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.594 [2024-07-26 14:04:13.490263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89392 len:8 PRP1 0x0 PRP2 0x0 00:23:00.594 [2024-07-26 14:04:13.490270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.490277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.594 [2024-07-26 14:04:13.490281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.594 [2024-07-26 14:04:13.490287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89400 len:8 PRP1 0x0 PRP2 0x0 00:23:00.594 [2024-07-26 14:04:13.490293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.490300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.594 [2024-07-26 14:04:13.490304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.594 [2024-07-26 14:04:13.490310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89408 len:8 PRP1 0x0 PRP2 0x0 00:23:00.594 [2024-07-26 14:04:13.490316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.490323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.594 [2024-07-26 14:04:13.490328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.594 [2024-07-26 14:04:13.490333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89416 len:8 PRP1 0x0 PRP2 0x0 00:23:00.594 [2024-07-26 14:04:13.490339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.490345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.594 [2024-07-26 14:04:13.490350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.594 [2024-07-26 14:04:13.490355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89424 len:8 PRP1 0x0 PRP2 0x0 00:23:00.594 [2024-07-26 14:04:13.490362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.594 [2024-07-26 14:04:13.490368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.594 [2024-07-26 14:04:13.490372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89432 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.490390] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.490395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89440 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.490414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.490419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89448 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.490438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.490443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89456 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.490461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.490465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89464 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.490483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.490488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89472 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.490505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.490510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89480 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.490529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:23:00.595 [2024-07-26 14:04:13.490534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89488 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.490552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.490557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89496 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.490575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.490580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89504 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.490598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.490603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89512 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.490623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.490628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89520 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.490646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.490651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89528 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.490670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.490674] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89536 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.490692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.490697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88688 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.490715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.490720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88696 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.490738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.490743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88704 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.490763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.490768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88712 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.490786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.490791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88720 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.490810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.490815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88728 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.490833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.490838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88736 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.490856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.490860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.490866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88744 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.490872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.501264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.501273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.595 [2024-07-26 14:04:13.501280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89544 len:8 PRP1 0x0 PRP2 0x0 00:23:00.595 [2024-07-26 14:04:13.501288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.595 [2024-07-26 14:04:13.501295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.595 [2024-07-26 14:04:13.501302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.596 [2024-07-26 14:04:13.501308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89552 len:8 PRP1 0x0 PRP2 0x0 00:23:00.596 [2024-07-26 14:04:13.501314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.596 [2024-07-26 14:04:13.501321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.596 [2024-07-26 14:04:13.501326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.596 [2024-07-26 14:04:13.501332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89560 len:8 PRP1 0x0 PRP2 0x0 00:23:00.596 [2024-07-26 14:04:13.501339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.596 [2024-07-26 14:04:13.501346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.596 [2024-07-26 14:04:13.501351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.596 
[2024-07-26 14:04:13.501356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89568 len:8 PRP1 0x0 PRP2 0x0
00:23:00.596 [2024-07-26 14:04:13.501363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.596 [2024-07-26 14:04:13.501403] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1517470 was disconnected and freed. reset controller.
00:23:00.596 [2024-07-26 14:04:13.501412] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:00.596 [2024-07-26 14:04:13.501433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:00.596 [2024-07-26 14:04:13.501440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.596 [2024-07-26 14:04:13.501448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:00.596 [2024-07-26 14:04:13.501454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.596 [2024-07-26 14:04:13.501462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:00.596 [2024-07-26 14:04:13.501468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.596 [2024-07-26 14:04:13.501475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:00.596 [2024-07-26 14:04:13.501481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.596 [2024-07-26 14:04:13.501487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:00.596 [2024-07-26 14:04:13.501523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1524540 (9): Bad file descriptor
00:23:00.596 [2024-07-26 14:04:13.504637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:00.596 [2024-07-26 14:04:13.542861] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:00.596 [2024-07-26 14:04:16.949878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.596 [2024-07-26 14:04:16.949912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.596 [2024-07-26 14:04:16.949926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.596 [2024-07-26 14:04:16.949933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.596 [2024-07-26 14:04:16.949942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.596 [2024-07-26 14:04:16.949949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.596 [2024-07-26 14:04:16.949958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.596 [2024-07-26 14:04:16.949964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.596 [2024-07-26 14:04:16.949976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.596 [2024-07-26 14:04:16.949983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.596 [2024-07-26 14:04:16.949991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.596 [2024-07-26 14:04:16.949997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.596 [2024-07-26 14:04:16.950006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.596 [2024-07-26 14:04:16.950012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.596 [2024-07-26 14:04:16.950020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.596 [2024-07-26 14:04:16.950026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.596 [2024-07-26 14:04:16.950034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.596 [2024-07-26 14:04:16.950040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.596 [2024-07-26 14:04:16.950054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.596 [2024-07-26 14:04:16.950061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.596 [2024-07-26 14:04:16.950070] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.596 [2024-07-26 14:04:16.950076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.596 [2024-07-26 14:04:16.950085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.596 [2024-07-26 14:04:16.950091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.596 [2024-07-26 14:04:16.950099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.596 [2024-07-26 14:04:16.950106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.596 [2024-07-26 14:04:16.950114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.596 [2024-07-26 14:04:16.950121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.596 [2024-07-26 14:04:16.950129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.596 [2024-07-26 14:04:16.950135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.596 [2024-07-26 14:04:16.950143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.596 [2024-07-26 14:04:16.950150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.596 [2024-07-26 14:04:16.950158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.596 [2024-07-26 14:04:16.950165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.596 [2024-07-26 14:04:16.950175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.596 [2024-07-26 14:04:16.950181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.596 [2024-07-26 14:04:16.950189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.596 [2024-07-26 14:04:16.950196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.596 [2024-07-26 14:04:16.950203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.596 [2024-07-26 14:04:16.950210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950218] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4440 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 
[2024-07-26 14:04:16.950511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950655] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.597 [2024-07-26 14:04:16.950756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.597 [2024-07-26 14:04:16.950763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.598 [2024-07-26 14:04:16.950770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.950778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.950784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.950791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.950798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.950806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.950813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.950820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.950826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.950834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.950840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.950848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.950855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.950863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.950871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.950879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.950885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.950893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.950901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.950909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.950915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.950923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.950929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.950936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.950943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.598 [2024-07-26 14:04:16.950951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.950958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.950965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.950972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.950980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.950986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.950993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 
nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.598 [2024-07-26 14:04:16.951329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.598 [2024-07-26 14:04:16.951337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:00.599 [2024-07-26 14:04:16.951385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.599 [2024-07-26 14:04:16.951743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.599 [2024-07-26 14:04:16.951766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.599 [2024-07-26 14:04:16.951772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:8 PRP1 0x0 PRP2 0x0 00:23:00.599 [2024-07-26 14:04:16.951778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951818] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15483f0 was disconnected and freed. reset controller. 
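The burst above is SPDK's bdev_nvme failover path draining I/O qpair 0x15483f0: every queued WRITE on qid:1 (lba 4688 through 5216 here) is printed and completed with ABORTED - SQ DELETION, the qpair is disconnected and freed, and a controller reset is scheduled. Consoles from this test repeat such bursts for hundreds of commands, so a small filter that tallies the printed commands per opcode and submission queue can make them easier to skim. Below is a minimal sketch, assuming one console line per nvme_qpair.c entry (as Jenkins emits them) and the exact print format shown above; the script and its regex are illustrative helpers, not part of the autotest.

#!/usr/bin/env python3
"""Tally SPDK nvme_qpair abort bursts in an autotest console log (illustrative helper)."""
import re
import sys
from collections import Counter

# Matches the nvme_io_qpair_print_command NOTICE entries shown above, e.g.
# "WRITE sqid:1 cid:37 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000"
CMD_RE = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:\d+ lba:(\d+) len:(\d+)")

def summarize(stream):
    counts = Counter()   # (opcode, sqid) -> number of printed commands
    lba_range = {}       # (opcode, sqid) -> (min lba, max lba)
    for line in stream:
        for opc, sqid, _cid, lba, _length in CMD_RE.findall(line):
            key = (opc, int(sqid))
            counts[key] += 1
            lo, hi = lba_range.get(key, (int(lba), int(lba)))
            lba_range[key] = (min(lo, int(lba)), max(hi, int(lba)))
    for (opc, sqid), n in sorted(counts.items()):
        lo, hi = lba_range[(opc, sqid)]
        print(f"{opc:5s} sqid:{sqid}  commands:{n:4d}  lba range:{lo}-{hi}")

if __name__ == "__main__":
    summarize(sys.stdin if len(sys.argv) < 2 else open(sys.argv[1]))

Piped over the section above, it would report roughly a few dozen WRITE commands on sqid:1 in the 4688-5216 lba range, all of which were completed as ABORTED - SQ DELETION.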
00:23:00.599 [2024-07-26 14:04:16.951828] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:00.599 [2024-07-26 14:04:16.951846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.599 [2024-07-26 14:04:16.951854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.599 [2024-07-26 14:04:16.951869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.599 [2024-07-26 14:04:16.951883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.599 [2024-07-26 14:04:16.951896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:16.951902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:00.599 [2024-07-26 14:04:16.951922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1524540 (9): Bad file descriptor 00:23:00.599 [2024-07-26 14:04:16.954763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:00.599 [2024-07-26 14:04:16.991446] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
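Between the two abort bursts the log records the actual failover sequence: bdev_nvme_failover_trid starts the failover from 10.0.0.2:4421 to 10.0.0.2:4422, the admin queue's pending ASYNC EVENT REQUESTs are aborted, nvme_ctrlr_fail marks nqn.2016-06.io.spdk:cnode1 as failed, the TCP flush of tqpair 0x1524540 returns Bad file descriptor, the controller is disconnected for reset, and the reset completes successfully. Pulling only these state-transition messages out of the console, and skipping the per-command NOTICE spam, gives a compact failover timeline; the sketch below does that under the same assumptions as the helper above (one entry per line, message text as printed here) and is likewise only illustrative.

#!/usr/bin/env python3
"""Print a compact failover/reset timeline from an SPDK autotest console (illustrative helper)."""
import re
import sys

# State-transition messages of interest, named exactly as they appear in the log above;
# the per-command print_command/print_completion NOTICE lines are deliberately ignored.
EVENT_TAGS = (
    "bdev_nvme_disconnected_qpair_cb",   # "qpair ... was disconnected and freed. reset controller."
    "bdev_nvme_failover_trid",           # "Start failover from <trid> to <trid>"
    "nvme_qpair_abort_queued_reqs",      # "aborting queued i/o"
    "nvme_ctrlr_fail",                   # "[<subnqn>] in failed state."
    "nvme_ctrlr_disconnect",             # "[<subnqn>] resetting controller"
    "_bdev_nvme_reset_ctrlr_complete",   # "Resetting controller successful."
)

STAMP_RE = re.compile(r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\]")
MSG_RE = re.compile(r"\*(NOTICE|ERROR)\*:\s*(.*)")

def timeline(stream):
    for line in stream:
        if any(tag in line for tag in EVENT_TAGS):
            stamp = STAMP_RE.search(line)
            when = stamp.group(1) if stamp else "unknown time"
            msg = MSG_RE.search(line)
            message = msg.group(2).strip() if msg else line.strip()
            print(f"{when}  {message}")

if __name__ == "__main__":
    timeline(sys.stdin if len(sys.argv) < 2 else open(sys.argv[1]))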
00:23:00.599 [2024-07-26 14:04:21.353440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.599 [2024-07-26 14:04:21.353485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:21.353494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.599 [2024-07-26 14:04:21.353502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.599 [2024-07-26 14:04:21.353509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.600 [2024-07-26 14:04:21.353516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.353523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.600 [2024-07-26 14:04:21.353530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.353537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1524540 is same with the state(5) to be set 00:23:00.600 [2024-07-26 14:04:21.354398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.600 [2024-07-26 14:04:21.354800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.600 [2024-07-26 14:04:21.354952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.600 [2024-07-26 14:04:21.354960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.601 [2024-07-26 14:04:21.354966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.354974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.601 [2024-07-26 14:04:21.354980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.354988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.601 [2024-07-26 14:04:21.354994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.601 [2024-07-26 14:04:21.355008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.601 [2024-07-26 14:04:21.355023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.601 [2024-07-26 14:04:21.355038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.601 [2024-07-26 14:04:21.355058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.601 [2024-07-26 14:04:21.355072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.601 [2024-07-26 14:04:21.355088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:119 nsid:1 lba:4600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.601 [2024-07-26 14:04:21.355102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.601 [2024-07-26 14:04:21.355117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.601 [2024-07-26 14:04:21.355131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.601 [2024-07-26 14:04:21.355146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.601 [2024-07-26 14:04:21.355160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.601 [2024-07-26 14:04:21.355174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.601 [2024-07-26 14:04:21.355188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.601 [2024-07-26 14:04:21.355202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.601 [2024-07-26 14:04:21.355218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.601 [2024-07-26 14:04:21.355232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:00.601 [2024-07-26 14:04:21.355246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.601 [2024-07-26 14:04:21.355260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.601 [2024-07-26 14:04:21.355275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.601 [2024-07-26 14:04:21.355289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.601 [2024-07-26 14:04:21.355303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.601 [2024-07-26 14:04:21.355318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.601 [2024-07-26 14:04:21.355332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.601 [2024-07-26 14:04:21.355346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.601 [2024-07-26 14:04:21.355361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.601 [2024-07-26 14:04:21.355375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.601 [2024-07-26 14:04:21.355390] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.601 [2024-07-26 14:04:21.355405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.601 [2024-07-26 14:04:21.355420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.601 [2024-07-26 14:04:21.355427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.601 [2024-07-26 14:04:21.355434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.355448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.355462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.355477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.355493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.355511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.355530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.355547] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.355567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.355586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.355603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.602 [2024-07-26 14:04:21.355625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.602 [2024-07-26 14:04:21.355646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.602 [2024-07-26 14:04:21.355669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.602 [2024-07-26 14:04:21.355692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.602 [2024-07-26 14:04:21.355712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.602 [2024-07-26 14:04:21.355732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.602 [2024-07-26 14:04:21.355752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.602 [2024-07-26 14:04:21.355771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.355790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.355809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.355828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.355848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.355870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.355888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.355907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.355926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.602 [2024-07-26 14:04:21.355946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 
14:04:21.355957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.602 [2024-07-26 14:04:21.355965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.602 [2024-07-26 14:04:21.355984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.355994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.602 [2024-07-26 14:04:21.356003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.356012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.602 [2024-07-26 14:04:21.356022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.356032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.602 [2024-07-26 14:04:21.356040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.356058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.602 [2024-07-26 14:04:21.356066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.356083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.602 [2024-07-26 14:04:21.356092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.356102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.356111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.356123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.356132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.356142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.356152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.356162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.356170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.356181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.602 [2024-07-26 14:04:21.356190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.602 [2024-07-26 14:04:21.356201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.603 [2024-07-26 14:04:21.356210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.603 [2024-07-26 14:04:21.356221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.603 [2024-07-26 14:04:21.356228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.603 [2024-07-26 14:04:21.356238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.603 [2024-07-26 14:04:21.356247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.603 [2024-07-26 14:04:21.356259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.603 [2024-07-26 14:04:21.356269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.603 [2024-07-26 14:04:21.356282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.603 [2024-07-26 14:04:21.356292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.603 [2024-07-26 14:04:21.356305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.603 [2024-07-26 14:04:21.356317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.603 [2024-07-26 14:04:21.356329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.603 [2024-07-26 14:04:21.356340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.603 [2024-07-26 14:04:21.356351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.603 [2024-07-26 14:04:21.356361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.603 [2024-07-26 14:04:21.356371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:121 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.603 [2024-07-26 14:04:21.356384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.603 [2024-07-26 14:04:21.356396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.603 [2024-07-26 14:04:21.356406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.603 [2024-07-26 14:04:21.356421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.603 [2024-07-26 14:04:21.356432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.603 [2024-07-26 14:04:21.356444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.603 [2024-07-26 14:04:21.356454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.603 [2024-07-26 14:04:21.356467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.603 [2024-07-26 14:04:21.356477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.603 [2024-07-26 14:04:21.356488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.603 [2024-07-26 14:04:21.356497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.603 [2024-07-26 14:04:21.356507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.603 [2024-07-26 14:04:21.356516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.603 [2024-07-26 14:04:21.356524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.603 [2024-07-26 14:04:21.356530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.603 [2024-07-26 14:04:21.356538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.603 [2024-07-26 14:04:21.356545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.603 [2024-07-26 14:04:21.356553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.603 [2024-07-26 14:04:21.356559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.603 [2024-07-26 14:04:21.356579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.603 [2024-07-26 14:04:21.356585] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.603 [2024-07-26 14:04:21.356590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:8 PRP1 0x0 PRP2 0x0 00:23:00.603 [2024-07-26 14:04:21.356596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.603 [2024-07-26 14:04:21.356637] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15480b0 was disconnected and freed. reset controller. 00:23:00.603 [2024-07-26 14:04:21.356646] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:00.603 [2024-07-26 14:04:21.356653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:00.603 [2024-07-26 14:04:21.359507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:00.603 [2024-07-26 14:04:21.359540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1524540 (9): Bad file descriptor 00:23:00.603 [2024-07-26 14:04:21.389168] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:00.603 00:23:00.603 Latency(us) 00:23:00.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.603 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:00.603 Verification LBA range: start 0x0 length 0x4000 00:23:00.603 NVMe0n1 : 15.01 10747.51 41.98 318.62 0.00 11544.28 1709.63 35788.35 00:23:00.603 =================================================================================================================== 00:23:00.603 Total : 10747.51 41.98 318.62 0.00 11544.28 1709.63 35788.35 00:23:00.603 Received shutdown signal, test time was about 15.000000 seconds 00:23:00.603 00:23:00.603 Latency(us) 00:23:00.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.603 =================================================================================================================== 00:23:00.603 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:00.603 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:00.603 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:00.603 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:00.603 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3056931 00:23:00.603 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3056931 /var/tmp/bdevperf.sock 00:23:00.603 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:00.603 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3056931 ']' 00:23:00.603 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.603 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:00.603 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:00.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:00.603 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:00.603 14:04:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:01.174 14:04:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:01.174 14:04:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:01.174 14:04:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:01.433 [2024-07-26 14:04:28.705834] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:01.433 14:04:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:01.693 [2024-07-26 14:04:28.894392] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:01.693 14:04:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:01.953 NVMe0n1 00:23:01.953 14:04:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:02.213 00:23:02.213 14:04:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:02.474 00:23:02.474 14:04:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:02.474 14:04:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:02.734 14:04:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:02.734 14:04:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:06.033 14:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:06.033 14:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:06.033 14:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3057842 00:23:06.033 14:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:06.033 14:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3057842 00:23:07.415 0 00:23:07.415 14:04:34 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:07.415 [2024-07-26 14:04:27.744908] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:23:07.415 [2024-07-26 14:04:27.744962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3056931 ] 00:23:07.415 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.415 [2024-07-26 14:04:27.800016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.415 [2024-07-26 14:04:27.870400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.416 [2024-07-26 14:04:30.104774] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:07.416 [2024-07-26 14:04:30.104835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.416 [2024-07-26 14:04:30.104847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.416 [2024-07-26 14:04:30.104856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.416 [2024-07-26 14:04:30.104863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.416 [2024-07-26 14:04:30.104870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.416 [2024-07-26 14:04:30.104877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.416 [2024-07-26 14:04:30.104884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.416 [2024-07-26 14:04:30.104890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.416 [2024-07-26 14:04:30.104897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:07.416 [2024-07-26 14:04:30.104925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:07.416 [2024-07-26 14:04:30.104941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d88540 (9): Bad file descriptor 00:23:07.416 [2024-07-26 14:04:30.115785] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:07.416 Running I/O for 1 seconds... 
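For readability, the multipath setup that host/failover.sh drives in the trace above can be condensed to the following sketch (rpc.py abbreviates /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py; the subsystem, addresses and ports are the ones shown in this run, and the exact piping inside the script may differ):

# publish two extra portals for the same subsystem so the initiator has failover paths
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

# attach all three paths to the bdevperf instance listening on /var/tmp/bdevperf.sock
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# confirm the controller exists, then remove the primary path to force a failover
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

The "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" notice in the bdevperf log above is the expected consequence of that detach.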
00:23:07.416 00:23:07.416 Latency(us) 00:23:07.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.416 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:07.416 Verification LBA range: start 0x0 length 0x4000 00:23:07.416 NVMe0n1 : 1.01 10999.77 42.97 0.00 0.00 11558.12 1966.08 31229.33 00:23:07.416 =================================================================================================================== 00:23:07.416 Total : 10999.77 42.97 0.00 0.00 11558.12 1966.08 31229.33 00:23:07.416 14:04:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:07.416 14:04:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:07.416 14:04:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:07.416 14:04:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:07.416 14:04:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:07.721 14:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:07.981 14:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:11.280 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:11.280 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:11.280 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3056931 00:23:11.280 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3056931 ']' 00:23:11.280 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3056931 00:23:11.280 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:23:11.280 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:11.280 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3056931 00:23:11.280 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:11.280 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:11.280 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3056931' 00:23:11.280 killing process with pid 3056931 00:23:11.280 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3056931 00:23:11.280 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3056931 00:23:11.280 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:11.280 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:11.540 rmmod nvme_tcp 00:23:11.540 rmmod nvme_fabrics 00:23:11.540 rmmod nvme_keyring 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3053739 ']' 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3053739 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3053739 ']' 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3053739 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3053739 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3053739' 00:23:11.540 killing process with pid 3053739 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3053739 00:23:11.540 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3053739 00:23:11.800 14:04:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:11.800 14:04:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:11.800 14:04:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:11.800 14:04:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:11.800 14:04:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:11.800 14:04:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.800 14:04:39 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.800 14:04:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:14.342 00:23:14.342 real 0m38.315s 00:23:14.342 user 2m3.780s 00:23:14.342 sys 0m7.360s 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:14.342 ************************************ 00:23:14.342 END TEST nvmf_failover 00:23:14.342 ************************************ 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.342 ************************************ 00:23:14.342 START TEST nvmf_host_discovery 00:23:14.342 ************************************ 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:14.342 * Looking for test storage... 00:23:14.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:14.342 14:04:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:23:14.342 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.625 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:19.625 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:23:19.625 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:19.625 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:19.625 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:19.625 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:19.625 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:19.625 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:23:19.625 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:19.625 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:23:19.625 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:23:19.625 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:23:19.625 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:23:19.625 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:23:19.625 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:23:19.625 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:19.625 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:19.625 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:19.625 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:19.625 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:19.625 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:19.626 14:04:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:19.626 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:19.626 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:19.626 Found net devices under 0000:86:00.0: cvl_0_0 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:19.626 Found net devices under 0000:86:00.1: cvl_0_1 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:19.626 14:04:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:19.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:19.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:23:19.626 00:23:19.626 --- 10.0.0.2 ping statistics --- 00:23:19.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.626 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:19.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:19.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.412 ms 00:23:19.626 00:23:19.626 --- 10.0.0.1 ping statistics --- 00:23:19.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.626 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3062131 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3062131 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3062131 ']' 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
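The 10.0.0.1/10.0.0.2 addresses used throughout these tests come from the namespace wiring traced just above; condensed, that setup amounts to the sketch below (cvl_0_0 and cvl_0_1 are the two e810 ports detected in this run):

# move the target-side port into its own network namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# let NVMe/TCP traffic in and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1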
00:23:19.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.626 14:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:19.626 [2024-07-26 14:04:46.572365] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:23:19.627 [2024-07-26 14:04:46.572408] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.627 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.627 [2024-07-26 14:04:46.629203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.627 [2024-07-26 14:04:46.708084] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.627 [2024-07-26 14:04:46.708115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.627 [2024-07-26 14:04:46.708122] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.627 [2024-07-26 14:04:46.708129] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.627 [2024-07-26 14:04:46.708134] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:19.627 [2024-07-26 14:04:46.708150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.197 [2024-07-26 14:04:47.395716] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:20.197 [2024-07-26 14:04:47.403845] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.197 null0 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.197 null1 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3062293 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3062293 /tmp/host.sock 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3062293 ']' 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:20.197 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.197 14:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:20.197 [2024-07-26 14:04:47.480321] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
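In outline, the discovery-test bring-up traced around this point is: the target (running inside cvl_0_0_ns_spdk) gets a TCP transport, a discovery listener on port 8009 and two null bdevs, while a second nvmf_tgt on /tmp/host.sock acts as the host and, in the trace that follows, is pointed at that discovery service. A rough sketch of the same steps, reusing the suite's rpc_cmd helper:

# target side: transport, discovery listener and two null bdevs to expose later
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc_cmd bdev_null_create null0 1000 512
rpc_cmd bdev_null_create null1 1000 512
rpc_cmd bdev_wait_for_examine

# host side: a separate nvmf_tgt on /tmp/host.sock watches the discovery service
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test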
00:23:20.197 [2024-07-26 14:04:47.480363] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3062293 ] 00:23:20.197 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.197 [2024-07-26 14:04:47.533968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.197 [2024-07-26 14:04:47.614468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.137 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:21.137 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:23:21.137 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.138 
14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:21.138 14:04:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:21.138 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.398 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:21.398 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:21.398 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.398 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.398 [2024-07-26 14:04:48.615064] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.398 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.398 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:21.398 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:21.398 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:21.398 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.398 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.398 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:21.398 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:21.398 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.398 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:23:21.399 14:04:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:23:21.969 [2024-07-26 14:04:49.335373] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:21.969 [2024-07-26 14:04:49.335395] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:21.969 [2024-07-26 14:04:49.335409] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:22.229 
[2024-07-26 14:04:49.462802] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:22.229 [2024-07-26 14:04:49.529160] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:22.229 [2024-07-26 14:04:49.529179] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:22.489 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
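With the host NQN granted access via nvmf_subsystem_add_host, the discovery service on /tmp/host.sock has attached a controller named nvme0 and exposed its namespace as bdev nvme0n1, which is what the get_subsystem_names and get_bdev_list checks above confirm. Those helpers are, in effect, the following two pipelines against the host's RPC socket (shown here with scripts/rpc.py standing in for the test's rpc_cmd wrapper):

  # Attached NVMe-oF controllers on the host app; expected output at this point: nvme0
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  # Block devices created from those controllers; expected output at this point: nvme0n1
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs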
00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.749 14:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.749 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:22.749 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:22.749 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:22.749 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:22.749 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:22.749 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.749 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.749 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.749 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:22.749 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:22.749 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:22.749 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:22.749 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:22.749 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:22.749 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.749 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:22.749 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:22.749 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.749 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.749 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.010 [2024-07-26 14:04:50.311685] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:23.010 [2024-07-26 14:04:50.312743] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:23.010 [2024-07-26 14:04:50.312768] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:23.010 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 
-- # get_subsystem_names 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval 
'[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:23.011 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.011 [2024-07-26 14:04:50.441491] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:23.271 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:23.271 14:04:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:23:23.271 [2024-07-26 14:04:50.541498] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:23.271 [2024-07-26 14:04:50.541517] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:23.271 [2024-07-26 14:04:50.541522] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:24.211 14:04:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.211 [2024-07-26 14:04:51.576122] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:24.211 [2024-07-26 14:04:51.576146] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:24.211 [2024-07-26 14:04:51.579827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.211 [2024-07-26 14:04:51.579847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.211 [2024-07-26 14:04:51.579856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.211 [2024-07-26 14:04:51.579864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.211 [2024-07-26 14:04:51.579872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:24.211 [2024-07-26 14:04:51.579879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.211 [2024-07-26 14:04:51.579888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.211 [2024-07-26 14:04:51.579895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.211 [2024-07-26 14:04:51.579902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbf30 is same with the state(5) to be set 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:24.211 [2024-07-26 14:04:51.589840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebbf30 (9): Bad file descriptor 00:23:24.211 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.212 [2024-07-26 14:04:51.599878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:24.212 [2024-07-26 14:04:51.600396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.212 [2024-07-26 14:04:51.600416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebbf30 with addr=10.0.0.2, port=4420 00:23:24.212 [2024-07-26 14:04:51.600424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbf30 is same with the state(5) to be set 00:23:24.212 [2024-07-26 14:04:51.600436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebbf30 (9): Bad file descriptor 00:23:24.212 [2024-07-26 14:04:51.600453] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:24.212 [2024-07-26 14:04:51.600460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:24.212 [2024-07-26 14:04:51.600467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:24.212 [2024-07-26 14:04:51.600477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.212 [2024-07-26 14:04:51.609932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:24.212 [2024-07-26 14:04:51.610416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.212 [2024-07-26 14:04:51.610429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebbf30 with addr=10.0.0.2, port=4420 00:23:24.212 [2024-07-26 14:04:51.610436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbf30 is same with the state(5) to be set 00:23:24.212 [2024-07-26 14:04:51.610447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebbf30 (9): Bad file descriptor 00:23:24.212 [2024-07-26 14:04:51.610462] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:24.212 [2024-07-26 14:04:51.610469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:24.212 [2024-07-26 14:04:51.610476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:24.212 [2024-07-26 14:04:51.610486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.212 [2024-07-26 14:04:51.619982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:24.212 [2024-07-26 14:04:51.620411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.212 [2024-07-26 14:04:51.620425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebbf30 with addr=10.0.0.2, port=4420 00:23:24.212 [2024-07-26 14:04:51.620432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbf30 is same with the state(5) to be set 00:23:24.212 [2024-07-26 14:04:51.620442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebbf30 (9): Bad file descriptor 00:23:24.212 [2024-07-26 14:04:51.620452] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:24.212 [2024-07-26 14:04:51.620459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:24.212 [2024-07-26 14:04:51.620465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:24.212 [2024-07-26 14:04:51.620475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
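Each of the error blocks in this stretch is one failed reconnect attempt against 10.0.0.2:4420: the test has just removed that listener on the target, so connect() returns errno 111 (ECONNREFUSED on Linux), the subsequent qpair flush reports a bad file descriptor, and every reset attempt ends with "Resetting controller failed." until the discovery poller refreshes the log page and drops the stale path. The target-side step that triggered this is the remove-listener RPC shown earlier, roughly:

  # Target side: drop the first data listener; the 4421 listener added before stays up
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420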
00:23:24.212 [2024-07-26 14:04:51.630035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:24.212 [2024-07-26 14:04:51.630732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.212 [2024-07-26 14:04:51.630746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebbf30 with addr=10.0.0.2, port=4420 00:23:24.212 [2024-07-26 14:04:51.630754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbf30 is same with the state(5) to be set 00:23:24.212 [2024-07-26 14:04:51.630766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebbf30 (9): Bad file descriptor 00:23:24.212 [2024-07-26 14:04:51.630791] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:24.212 [2024-07-26 14:04:51.630798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:24.212 [2024-07-26 14:04:51.630805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:24.212 [2024-07-26 14:04:51.630814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.212 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.212 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:24.212 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:24.212 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:24.212 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:24.212 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:24.212 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:24.212 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:24.212 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.212 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.212 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.212 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:24.212 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:24.212 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:24.212 [2024-07-26 14:04:51.640091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:24.212 [2024-07-26 14:04:51.640508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.212 [2024-07-26 14:04:51.640521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebbf30 with addr=10.0.0.2, port=4420 00:23:24.212 [2024-07-26 14:04:51.640528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xebbf30 is same with the state(5) to be set 00:23:24.212 [2024-07-26 14:04:51.640538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebbf30 (9): Bad file descriptor 00:23:24.212 [2024-07-26 14:04:51.640547] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:24.212 [2024-07-26 14:04:51.640553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:24.212 [2024-07-26 14:04:51.640560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:24.212 [2024-07-26 14:04:51.640569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.473 [2024-07-26 14:04:51.650143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:24.473 [2024-07-26 14:04:51.650438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.473 [2024-07-26 14:04:51.650451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebbf30 with addr=10.0.0.2, port=4420 00:23:24.473 [2024-07-26 14:04:51.650458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbf30 is same with the state(5) to be set 00:23:24.473 [2024-07-26 14:04:51.650469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebbf30 (9): Bad file descriptor 00:23:24.473 [2024-07-26 14:04:51.650486] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:24.473 [2024-07-26 14:04:51.650496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:24.473 [2024-07-26 14:04:51.650503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:24.473 [2024-07-26 14:04:51.650512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.473 [2024-07-26 14:04:51.660196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:24.473 [2024-07-26 14:04:51.660667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.473 [2024-07-26 14:04:51.660679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebbf30 with addr=10.0.0.2, port=4420 00:23:24.473 [2024-07-26 14:04:51.660685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbf30 is same with the state(5) to be set 00:23:24.473 [2024-07-26 14:04:51.660695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebbf30 (9): Bad file descriptor 00:23:24.473 [2024-07-26 14:04:51.660711] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:24.473 [2024-07-26 14:04:51.660717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:24.473 [2024-07-26 14:04:51.660724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:24.473 [2024-07-26 14:04:51.660732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
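Once the refreshed discovery log page is processed, the 10.0.0.2:4420 path is pruned and only 10.0.0.2:4421 remains, which the next entries report as "4420 not found" / "4421 found again"; the test then waits until get_subsystem_paths for nvme0 returns just $NVMF_SECOND_PORT. That check reduces to a single pipeline over the controller's transport IDs, roughly:

  # Remaining trsvcid values for controller nvme0 on the host socket; expected output here: 4421
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs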
00:23:24.473 [2024-07-26 14:04:51.663722] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:24.473 [2024-07-26 14:04:51.663737] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.473 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.474 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.733 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.733 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:24.733 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:24.733 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:24.733 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:24.733 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:24.733 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.733 14:04:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.672 [2024-07-26 14:04:53.005238] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:25.672 [2024-07-26 14:04:53.005256] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:25.672 [2024-07-26 14:04:53.005270] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:25.672 [2024-07-26 14:04:53.094531] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:25.931 [2024-07-26 14:04:53.365134] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:25.931 [2024-07-26 14:04:53.365161] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:25.931 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.931 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:25.931 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:26.190 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:26.190 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:26.190 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.190 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:26.190 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.190 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:23:26.190 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.190 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.190 request: 00:23:26.190 { 00:23:26.190 "name": "nvme", 00:23:26.190 "trtype": "tcp", 00:23:26.190 "traddr": "10.0.0.2", 00:23:26.190 "adrfam": "ipv4", 00:23:26.190 "trsvcid": "8009", 00:23:26.190 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:26.190 "wait_for_attach": true, 00:23:26.190 "method": "bdev_nvme_start_discovery", 00:23:26.190 "req_id": 1 00:23:26.190 } 00:23:26.190 Got JSON-RPC error response 00:23:26.190 response: 00:23:26.190 { 00:23:26.190 "code": -17, 00:23:26.190 "message": "File exists" 00:23:26.190 } 00:23:26.190 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:26.190 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:26.190 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:26.190 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:26.190 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:26.190 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:26.190 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:26.190 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:26.190 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.190 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.191 request: 00:23:26.191 { 00:23:26.191 "name": "nvme_second", 00:23:26.191 "trtype": "tcp", 00:23:26.191 "traddr": "10.0.0.2", 00:23:26.191 "adrfam": "ipv4", 00:23:26.191 "trsvcid": "8009", 00:23:26.191 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:26.191 "wait_for_attach": true, 00:23:26.191 "method": "bdev_nvme_start_discovery", 00:23:26.191 "req_id": 1 00:23:26.191 } 00:23:26.191 Got JSON-RPC error response 00:23:26.191 response: 00:23:26.191 { 00:23:26.191 "code": -17, 00:23:26.191 "message": "File exists" 00:23:26.191 } 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:26.191 14:04:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.191 14:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.571 [2024-07-26 14:04:54.621188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:27.571 [2024-07-26 14:04:54.621216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeed2a0 with addr=10.0.0.2, port=8010 00:23:27.571 [2024-07-26 14:04:54.621228] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:27.571 [2024-07-26 14:04:54.621235] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:27.571 [2024-07-26 14:04:54.621241] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:28.508 [2024-07-26 14:04:55.623562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.508 [2024-07-26 14:04:55.623587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeed2a0 with addr=10.0.0.2, port=8010 00:23:28.508 [2024-07-26 14:04:55.623598] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:28.508 [2024-07-26 14:04:55.623604] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:23:28.508 [2024-07-26 14:04:55.623611] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:29.446 [2024-07-26 14:04:56.625441] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:29.446 request: 00:23:29.446 { 00:23:29.446 "name": "nvme_second", 00:23:29.446 "trtype": "tcp", 00:23:29.446 "traddr": "10.0.0.2", 00:23:29.446 "adrfam": "ipv4", 00:23:29.446 "trsvcid": "8010", 00:23:29.446 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:29.446 "wait_for_attach": false, 00:23:29.446 "attach_timeout_ms": 3000, 00:23:29.446 "method": "bdev_nvme_start_discovery", 00:23:29.446 "req_id": 1 00:23:29.446 } 00:23:29.446 Got JSON-RPC error response 00:23:29.446 response: 00:23:29.446 { 00:23:29.446 "code": -110, 00:23:29.446 "message": "Connection timed out" 00:23:29.446 } 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3062293 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:29.446 rmmod nvme_tcp 00:23:29.446 rmmod nvme_fabrics 00:23:29.446 rmmod nvme_keyring 00:23:29.446 14:04:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3062131 ']' 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3062131 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 3062131 ']' 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 3062131 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3062131 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3062131' 00:23:29.446 killing process with pid 3062131 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 3062131 00:23:29.446 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 3062131 00:23:29.707 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:29.707 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:29.707 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:29.707 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:29.707 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:29.707 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.707 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.707 14:04:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.653 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:31.653 00:23:31.653 real 0m17.762s 00:23:31.653 user 0m22.696s 00:23:31.653 sys 0m5.346s 00:23:31.653 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:31.653 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.653 ************************************ 00:23:31.653 END TEST nvmf_host_discovery 00:23:31.653 ************************************ 00:23:31.653 14:04:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:31.653 14:04:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
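
The discovery error paths exercised in the test that just finished (a duplicate bdev_nvme_start_discovery rejected with -17 "File exists", and an attach against the unanswering 8010 port failing with -110 once its 3000 ms deadline expires) can be replayed by hand against the same host socket. A minimal sketch, assuming the rpc.py path and /tmp/host.sock socket shown in the trace; the "|| true" guards only keep an interactive replay going past the expected failures:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # initial discovery service on the 8009 discovery port (succeeds)
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # starting it again under the same name is rejected with JSON-RPC error -17 "File exists"
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w || true
    # a discovery target that never answers (8010 here) fails with -110 "Connection timed out"
    # after the -T attach deadline
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 || true
    # tear the discovery service down, as the test does before exiting
    $rpc -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
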
00:23:31.653 14:04:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:31.653 14:04:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.914 ************************************ 00:23:31.914 START TEST nvmf_host_multipath_status 00:23:31.914 ************************************ 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:31.914 * Looking for test storage... 00:23:31.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
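
Most of the xtrace in the discovery run above is a small generic polling helper from autotest_common.sh expanding its condition string. A hedged reconstruction of that pattern, pieced together from the line numbers visible in the trace (@914-@918); the retry budget of 10 is printed there, while the pacing between retries and the failure return value are assumptions:

    waitforcondition() {
        local cond=$1   # e.g. '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
        local max=10    # retry budget seen at autotest_common.sh@915
        while (( max-- )); do
            eval "$cond" && return 0   # condition re-evaluated on each pass (@917/@918)
            sleep 1                    # assumed pacing; the interval is not visible in the trace
        done
        return 1                       # assumed failure path once the budget is exhausted
    }
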
00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:31.914 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.915 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:31.915 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:31.915 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:31.915 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.915 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.915 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.915 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:31.915 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:31.915 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:23:31.915 14:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:37.198 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:37.198 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:23:37.198 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:37.198 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:37.198 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:37.198 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:37.198 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:23:37.198 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:23:37.198 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:37.198 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:23:37.198 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:23:37.198 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:23:37.198 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:23:37.198 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:23:37.198 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:23:37.198 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:37.199 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:37.199 
14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:37.199 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:37.199 Found net devices under 0000:86:00.0: cvl_0_0 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:37.199 14:05:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:37.199 Found net devices under 0000:86:00.1: cvl_0_1 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:37.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:23:37.199 00:23:37.199 --- 10.0.0.2 ping statistics --- 00:23:37.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.199 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:37.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:37.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:23:37.199 00:23:37.199 --- 10.0.0.1 ping statistics --- 00:23:37.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.199 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3067417 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3067417 00:23:37.199 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3067417 ']' 00:23:37.200 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.200 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:37.200 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
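
Behind the ping statistics above, nvmf_tcp_init has moved the first ice port (cvl_0_0) into a private network namespace for the target and left the second port (cvl_0_1) on the host side for the initiator. Condensed from the commands in the trace, keeping the interface names and 10.0.0.x addresses this rig used:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                       # host -> namespace (0.257 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # namespace -> host (0.355 ms above)
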
00:23:37.200 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:37.200 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:37.200 14:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:37.200 [2024-07-26 14:05:04.583680] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:23:37.200 [2024-07-26 14:05:04.583723] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.200 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.460 [2024-07-26 14:05:04.641346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:37.460 [2024-07-26 14:05:04.721303] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.460 [2024-07-26 14:05:04.721338] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.460 [2024-07-26 14:05:04.721345] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.460 [2024-07-26 14:05:04.721351] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.460 [2024-07-26 14:05:04.721358] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:37.460 [2024-07-26 14:05:04.721394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.460 [2024-07-26 14:05:04.721397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.030 14:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:38.030 14:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:23:38.030 14:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:38.030 14:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:38.030 14:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:38.030 14:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.030 14:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3067417 00:23:38.030 14:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:38.290 [2024-07-26 14:05:05.582104] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.290 14:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:38.549 Malloc0 00:23:38.549 14:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:23:38.549 14:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:38.809 14:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:39.070 [2024-07-26 14:05:06.311790] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.070 14:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:39.070 [2024-07-26 14:05:06.476224] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:39.070 14:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3067707 00:23:39.070 14:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:39.070 14:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:39.070 14:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3067707 /var/tmp/bdevperf.sock 00:23:39.070 14:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3067707 ']' 00:23:39.070 14:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:39.070 14:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:39.070 14:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:39.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
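
The target bring-up that follows the nvmf_tgt start reduces to a short RPC sequence: create the TCP transport, back the subsystem with a malloc bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 above), and expose it on both test ports. A condensed sketch, assuming the same rpc.py path and the target's default /var/tmp/spdk.sock socket (the trace runs the target inside the cvl_0_0_ns_spdk namespace and reaches it through that wrapper):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0       # size (MB) / block size per the constants above
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf then attaches Nvme0 through both listeners (the second attach with -x multipath), and the remainder of the trace flips the listeners' ANA state via nvmf_subsystem_listener_set_ana_state while polling bdev_nvme_get_io_paths for the expected current/connected/accessible flags.
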
00:23:39.070 14:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:39.070 14:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:40.028 14:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:40.028 14:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:23:40.028 14:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:40.288 14:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:40.548 Nvme0n1 00:23:40.548 14:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:40.808 Nvme0n1 00:23:41.067 14:05:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:41.067 14:05:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:42.978 14:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:42.978 14:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:43.238 14:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:43.238 14:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:44.621 14:05:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:44.621 14:05:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:44.621 14:05:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.621 14:05:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:44.621 14:05:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.621 14:05:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:44.621 14:05:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:44.621 14:05:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.621 14:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:44.621 14:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:44.621 14:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:44.621 14:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.882 14:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.882 14:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:44.882 14:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:44.882 14:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.143 14:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.143 14:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:45.143 14:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.143 14:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:45.143 14:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.143 14:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:45.143 14:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.143 14:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:45.404 14:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.404 14:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:45.404 14:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:45.668 14:05:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:45.926 14:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:46.864 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:46.864 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:46.864 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.864 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:47.123 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:47.123 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:47.123 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.123 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:47.123 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.123 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:47.123 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.123 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:47.382 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.382 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:47.382 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.382 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:47.641 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.641 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:47.641 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.641 14:05:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:47.901 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.901 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:47.901 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.901 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:47.901 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.901 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:47.901 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:48.160 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:48.419 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:49.358 14:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:49.358 14:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:49.359 14:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.359 14:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:49.618 14:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.619 14:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:49.619 14:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.619 14:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:49.619 14:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:49.619 14:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:49.619 14:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.619 14:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:49.879 14:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.879 14:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:49.879 14:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.879 14:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:50.138 14:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.138 14:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:50.138 14:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.138 14:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:50.397 14:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.397 14:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:50.397 14:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:50.397 14:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.397 14:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.397 14:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:50.398 14:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:50.657 14:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:50.917 14:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:51.856 14:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:51.856 14:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:51.856 14:05:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.856 14:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:52.145 14:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.145 14:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:52.145 14:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.145 14:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:52.145 14:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:52.145 14:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:52.145 14:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.145 14:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:52.404 14:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.404 14:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:52.404 14:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:52.404 14:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.663 14:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.663 14:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:52.663 14:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.663 14:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:52.923 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.923 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:52.923 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.923 14:05:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:52.923 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:52.923 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:52.923 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:53.182 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:53.442 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:54.382 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:54.382 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:54.382 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.382 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:54.642 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:54.642 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:54.642 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.642 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:54.642 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:54.642 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:54.642 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.642 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:54.902 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.902 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:54.902 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:23:54.902 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.162 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.162 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:55.162 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.162 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:55.162 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:55.162 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:55.162 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.162 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:55.422 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:55.422 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:55.422 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:55.682 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:55.682 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:57.064 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:57.064 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:57.064 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.064 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:57.064 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:57.064 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:57.064 14:05:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.064 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:57.064 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.064 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:57.064 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.064 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:57.325 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.325 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:57.325 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.325 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:57.584 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.584 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:57.584 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.584 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:57.844 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:57.844 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:57.844 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.844 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:57.844 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.844 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:58.104 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:23:58.104 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:58.363 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:58.363 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:59.740 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:59.740 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:59.740 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.740 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:59.740 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.740 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:59.740 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.740 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:59.740 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.740 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:59.741 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:59.741 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.000 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.000 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:00.000 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.000 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:00.260 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.260 14:05:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:00.260 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.260 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:00.520 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.520 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:00.520 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:00.520 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.520 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.520 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:00.780 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:00.780 14:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:01.039 14:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:01.977 14:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:01.977 14:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:01.977 14:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.977 14:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:02.236 14:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:02.236 14:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:02.236 14:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.236 14:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:02.496 14:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.496 14:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:02.496 14:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.496 14:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:02.496 14:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.496 14:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:02.496 14:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.496 14:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:02.755 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.755 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:02.755 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.755 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:03.015 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.015 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:03.015 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.015 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:03.273 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.273 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:03.273 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:03.273 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:03.531 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
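[Editor's note] The @59/@60 xtrace lines above repeat the same pair of RPC calls for every ANA transition in this test. A minimal sketch of that helper, reconstructed from the trace (the real definition lives in test/nvmf/host/multipath_status.sh; the rpc.py path, subsystem NQN, address and ports are copied verbatim from the log, everything else is an approximation):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  set_ANA_state() {
      # First argument drives the 4420 listener, second drives 4421
      # (e.g. "non_optimized inaccessible", as in the trace above).
      local state_4420=$1 state_4421=$2
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$state_4420"
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$state_4421"
  }

After each such transition the script sleeps for a second before re-checking path status, which matches the @91/@95/@101/... "sleep 1" lines in the trace.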
00:24:04.466 14:05:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:04.466 14:05:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:04.466 14:05:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:04.467 14:05:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.725 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.725 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:04.725 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.725 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:04.985 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.985 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:04.985 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.985 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:04.985 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.985 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:04.985 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.985 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:05.245 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.245 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:05.245 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.245 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:05.505 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.505 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:05.505 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.505 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:05.765 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.765 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:05.765 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:05.765 14:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:06.028 14:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:06.989 14:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:06.989 14:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:06.989 14:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.989 14:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:07.248 14:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.248 14:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:07.248 14:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.248 14:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:07.507 14:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:07.507 14:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:07.507 14:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.507 14:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:07.507 14:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:24:07.507 14:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:07.507 14:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.507 14:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:07.767 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.767 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:07.767 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.767 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:08.027 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.027 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:08.027 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.027 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:08.287 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:08.287 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3067707 00:24:08.287 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3067707 ']' 00:24:08.287 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3067707 00:24:08.287 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:24:08.287 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:08.288 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3067707 00:24:08.288 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:08.288 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:08.288 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3067707' 00:24:08.288 killing process with pid 3067707 00:24:08.288 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3067707 00:24:08.288 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3067707 00:24:08.288 Connection closed with partial response: 00:24:08.288 00:24:08.288 00:24:08.551 
14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3067707 00:24:08.551 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:08.551 [2024-07-26 14:05:06.535309] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:24:08.551 [2024-07-26 14:05:06.535367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3067707 ] 00:24:08.551 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.551 [2024-07-26 14:05:06.585067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.551 [2024-07-26 14:05:06.658962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.551 Running I/O for 90 seconds... 00:24:08.551 [2024-07-26 14:05:20.486204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.551 [2024-07-26 14:05:20.486244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:08.551 [2024-07-26 14:05:20.486295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.551 [2024-07-26 14:05:20.486304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:08.551 [2024-07-26 14:05:20.486319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.551 [2024-07-26 14:05:20.486327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:08.551 [2024-07-26 14:05:20.486340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.551 [2024-07-26 14:05:20.486349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:08.551 [2024-07-26 14:05:20.486362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.551 [2024-07-26 14:05:20.486369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:08.551 [2024-07-26 14:05:20.486384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.551 [2024-07-26 14:05:20.486391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.552 [2024-07-26 14:05:20.486411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
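[Editor's note] Each check_status block traced above is six port_status calls, one per (port, field) pair: current, connected and accessible for listeners 4420 and 4421. A minimal sketch of that check, assembled from the @64 lines (the bdevperf RPC socket, the bdev_nvme_get_io_paths method and the jq filter are verbatim from the trace; the helper's structure is an approximation of the real script):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  port_status() {
      # e.g. port_status 4420 current true
      local port=$1 field=$2 expected=$3
      local actual
      actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
      [[ "$actual" == "$expected" ]]
  }

The expected values follow directly from the ANA state just set: an optimized path is reported current, an inaccessible one loses accessible (and, under active_passive, current moves to the surviving path).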
00:24:08.552 [2024-07-26 14:05:20.486425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.552 [2024-07-26 14:05:20.486431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.552 [2024-07-26 14:05:20.486451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.552 [2024-07-26 14:05:20.486473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.552 [2024-07-26 14:05:20.486498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.552 [2024-07-26 14:05:20.486518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.552 [2024-07-26 14:05:20.486539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:51016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.552 [2024-07-26 14:05:20.486559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:51024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.552 [2024-07-26 14:05:20.486579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.552 [2024-07-26 14:05:20.486599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.552 [2024-07-26 14:05:20.486620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:51048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.552 [2024-07-26 14:05:20.486642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:51056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.552 [2024-07-26 14:05:20.486663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:51064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.552 [2024-07-26 14:05:20.486682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.552 [2024-07-26 14:05:20.486702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:51080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.552 [2024-07-26 14:05:20.486721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:51088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.552 [2024-07-26 14:05:20.486740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.552 [2024-07-26 14:05:20.486762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.552 [2024-07-26 14:05:20.486782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.552 [2024-07-26 14:05:20.486863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:51120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.552 [2024-07-26 14:05:20.486885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:51128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.552 [2024-07-26 14:05:20.486908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.552 [2024-07-26 14:05:20.486928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.552 [2024-07-26 14:05:20.486949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.486964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.552 [2024-07-26 14:05:20.486970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.487657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.552 [2024-07-26 14:05:20.487666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.487682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.552 [2024-07-26 14:05:20.487689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.487704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.552 [2024-07-26 14:05:20.487711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.487725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.552 [2024-07-26 14:05:20.487733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.487750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.552 [2024-07-26 14:05:20.487757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.487771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:08.552 [2024-07-26 14:05:20.487778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.487792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.552 [2024-07-26 14:05:20.487800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.487813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.552 [2024-07-26 14:05:20.487820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.487835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.552 [2024-07-26 14:05:20.487842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.487856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.552 [2024-07-26 14:05:20.487863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.487878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.552 [2024-07-26 14:05:20.487884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.487898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.552 [2024-07-26 14:05:20.487906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:08.552 [2024-07-26 14:05:20.487920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.553 [2024-07-26 14:05:20.487927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.487941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.553 [2024-07-26 14:05:20.487948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.487963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.553 [2024-07-26 14:05:20.487970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.487984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 
lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.553 [2024-07-26 14:05:20.487992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.553 [2024-07-26 14:05:20.488018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.553 [2024-07-26 14:05:20.488094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.553 [2024-07-26 14:05:20.488117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.553 [2024-07-26 14:05:20.488152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.553 [2024-07-26 14:05:20.488176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.553 [2024-07-26 14:05:20.488198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.553 [2024-07-26 14:05:20.488221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.553 [2024-07-26 14:05:20.488243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.553 [2024-07-26 14:05:20.488265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488281] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.553 [2024-07-26 14:05:20.488287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.553 [2024-07-26 14:05:20.488309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.553 [2024-07-26 14:05:20.488332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.553 [2024-07-26 14:05:20.488356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.553 [2024-07-26 14:05:20.488378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:51136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.553 [2024-07-26 14:05:20.488401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:51144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.553 [2024-07-26 14:05:20.488423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.553 [2024-07-26 14:05:20.488446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.553 [2024-07-26 14:05:20.488468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.553 [2024-07-26 14:05:20.488491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 
00:24:08.553 [2024-07-26 14:05:20.488506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:51176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.553 [2024-07-26 14:05:20.488513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.553 [2024-07-26 14:05:20.488535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.553 [2024-07-26 14:05:20.488557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.553 [2024-07-26 14:05:20.488579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.553 [2024-07-26 14:05:20.488601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.553 [2024-07-26 14:05:20.488625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.553 [2024-07-26 14:05:20.488649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.553 [2024-07-26 14:05:20.488671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.553 [2024-07-26 14:05:20.488693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:51232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.553 [2024-07-26 14:05:20.488715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.553 [2024-07-26 14:05:20.488737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.553 [2024-07-26 14:05:20.488759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:51256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.553 [2024-07-26 14:05:20.488782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.553 [2024-07-26 14:05:20.488804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:08.553 [2024-07-26 14:05:20.488819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:51272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.553 [2024-07-26 14:05:20.488826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.488841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:51280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.488848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.488864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:51288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.488870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.488885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:51296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.488893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.488910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.488917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.488932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.488939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.488955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.488962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.488977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.488984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:51336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:51344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.554 [2024-07-26 14:05:20.489137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.554 [2024-07-26 14:05:20.489162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.554 [2024-07-26 14:05:20.489188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.554 [2024-07-26 14:05:20.489213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.554 [2024-07-26 14:05:20.489238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:08.554 [2024-07-26 14:05:20.489263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:51400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 
nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.554 [2024-07-26 14:05:20.489816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:08.554 [2024-07-26 14:05:20.489834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:20.489841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:20.489859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.555 [2024-07-26 14:05:20.489866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:20.489884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:51536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:20.489892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:20.489910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:51544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:20.489917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:20.489934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:51552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:20.489942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:20.489959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:51560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:20.489967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:20.489985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:51568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:20.489992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:24:08.555 [2024-07-26 14:05:20.490010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:20.490017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:20.490035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:51584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:20.490046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:20.490064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:20.490071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.358525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.555 [2024-07-26 14:05:33.358564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.358598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.555 [2024-07-26 14:05:33.358606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.358619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.555 [2024-07-26 14:05:33.358627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.358640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.555 [2024-07-26 14:05:33.358646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.358659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.555 [2024-07-26 14:05:33.358666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.358683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.555 [2024-07-26 14:05:33.358690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.358702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.555 [2024-07-26 14:05:33.358710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.358723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.555 [2024-07-26 14:05:33.358730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.359180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.555 [2024-07-26 14:05:33.359193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.359208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:33.359215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.359228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.555 [2024-07-26 14:05:33.359235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.359248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.555 [2024-07-26 14:05:33.359255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.359268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.555 [2024-07-26 14:05:33.359275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.359287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.555 [2024-07-26 14:05:33.359294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.359307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:33.359315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.359328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:33.359335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.359348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:33.359356] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.359371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.555 [2024-07-26 14:05:33.359378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.359390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:33.359397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.359410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:33.359417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.359429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:33.359436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.359448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:33.359455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.359468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:33.359474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.359487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:33.359494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.359507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:33.359514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.359667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.555 [2024-07-26 14:05:33.359677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.359690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110272 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:33.359697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.359714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.555 [2024-07-26 14:05:33.359721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:08.555 [2024-07-26 14:05:33.359733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.556 [2024-07-26 14:05:33.359740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.556 [2024-07-26 14:05:33.360423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.556 [2024-07-26 14:05:33.360446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.556 [2024-07-26 14:05:33.360466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.360485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.360504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.360523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.360542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.360562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.360581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.360600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.360620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.360639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.360661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.360681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.360701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.360720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.360739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a 
p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.360905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.360925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.360945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.360965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.360984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.360996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.361004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.361016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.361023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.361035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.361049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.361062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.361069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.361082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.361089] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.361101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.361108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.361120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.556 [2024-07-26 14:05:33.361127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:08.556 [2024-07-26 14:05:33.361140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.557 [2024-07-26 14:05:33.361146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:08.557 [2024-07-26 14:05:33.361159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.557 [2024-07-26 14:05:33.361166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:08.557 [2024-07-26 14:05:33.361179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.557 [2024-07-26 14:05:33.361186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:08.557 [2024-07-26 14:05:33.361198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.557 [2024-07-26 14:05:33.361205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:08.557 [2024-07-26 14:05:33.361217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.557 [2024-07-26 14:05:33.361224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:08.557 [2024-07-26 14:05:33.361237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.557 [2024-07-26 14:05:33.361244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:08.557 Received shutdown signal, test time was about 27.151134 seconds 00:24:08.557 00:24:08.557 Latency(us) 00:24:08.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.557 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:08.557 Verification LBA range: start 0x0 length 0x4000 00:24:08.557 Nvme0n1 : 27.15 10454.84 40.84 0.00 0.00 12221.53 452.34 3034487.76 00:24:08.557 =================================================================================================================== 
00:24:08.557 Total : 10454.84 40.84 0.00 0.00 12221.53 452.34 3034487.76 00:24:08.557 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:08.557 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:08.557 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:08.557 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:08.557 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:08.557 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:24:08.557 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:08.557 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:24:08.557 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:08.557 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:08.557 rmmod nvme_tcp 00:24:08.557 rmmod nvme_fabrics 00:24:08.557 rmmod nvme_keyring 00:24:08.557 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:08.557 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:24:08.557 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:24:08.557 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3067417 ']' 00:24:08.557 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3067417 00:24:08.557 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3067417 ']' 00:24:08.557 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3067417 00:24:08.557 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:24:08.557 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:08.557 14:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3067417 00:24:08.817 14:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:08.817 14:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:08.817 14:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3067417' 00:24:08.817 killing process with pid 3067417 00:24:08.817 14:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3067417 00:24:08.817 14:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3067417 00:24:08.817 14:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:08.817 14:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # 
[[ tcp == \t\c\p ]] 00:24:08.817 14:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:08.817 14:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:08.817 14:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:08.817 14:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.817 14:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.817 14:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:11.358 00:24:11.358 real 0m39.178s 00:24:11.358 user 1m46.252s 00:24:11.358 sys 0m10.534s 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:11.358 ************************************ 00:24:11.358 END TEST nvmf_host_multipath_status 00:24:11.358 ************************************ 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.358 ************************************ 00:24:11.358 START TEST nvmf_discovery_remove_ifc 00:24:11.358 ************************************ 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:11.358 * Looking for test storage... 
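For reference, the killprocess teardown that just ran for the multipath_status target (pid 3067417) condenses to the steps below. This is a sketch assembled from the trace above, not the verbatim autotest_common.sh helper, and it leaves out any extra handling the real function has:

    pid=3067417                              # value from this run
    kill -0 "$pid"                           # confirm the target app is still alive
    ps --no-headers -o comm= "$pid"          # reactor_0 here; the helper refuses a plain kill on sudo wrappers
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                              # works because nvmfappstart launched it from this same shell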
00:24:11.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.358 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
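The --hostnqn/--hostid pair set up above comes from nvme-cli's gen-hostnqn. A minimal sketch of that part of nvmf/common.sh, assuming nvme-cli is installed; the exact expansion used to derive the ID is an assumption, since the trace only shows the resulting values:

    NVME_HOSTNQN=$(nvme gen-hostnqn)         # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}      # keep just the UUID portion (assumed derivation)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # presumably consumed by the later 'nvme connect' helpers so the target sees a stable host identity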
00:24:11.359 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:11.359 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:11.359 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:11.359 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:11.359 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:11.359 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:11.359 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:11.359 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:11.359 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:11.359 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:11.359 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.359 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:11.359 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:11.359 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:11.359 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.359 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.359 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.359 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:11.359 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:11.359 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:24:11.359 14:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:24:16.643 14:05:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:16.643 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:16.643 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:16.643 Found net devices under 0000:86:00.0: cvl_0_0 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.643 
14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:16.643 Found net devices under 0000:86:00.1: cvl_0_1 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:16.643 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:16.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:16.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:24:16.644 00:24:16.644 --- 10.0.0.2 ping statistics --- 00:24:16.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.644 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:16.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.447 ms 00:24:16.644 00:24:16.644 --- 10.0.0.1 ping statistics --- 00:24:16.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.644 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3076030 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3076030 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3076030 ']' 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
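Stripped of the xtrace prefixes, the nvmf_tcp_init sequence above builds the following topology (interface names and addresses are the ones from this run; they differ per rig):

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1                 # clear stale addresses
    ip netns add cvl_0_0_ns_spdk                                       # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target listen address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                 # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator check

The target app is then prefixed with 'ip netns exec cvl_0_0_ns_spdk' (NVMF_APP picks up NVMF_TARGET_NS_CMD), which is why the 8009/4420 listeners started below live at 10.0.0.2.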
00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:16.644 14:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:16.644 [2024-07-26 14:05:43.946676] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:24:16.644 [2024-07-26 14:05:43.946719] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.644 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.644 [2024-07-26 14:05:44.002404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.904 [2024-07-26 14:05:44.082054] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.904 [2024-07-26 14:05:44.082089] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.904 [2024-07-26 14:05:44.082099] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.904 [2024-07-26 14:05:44.082105] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.904 [2024-07-26 14:05:44.082110] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:16.904 [2024-07-26 14:05:44.082126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.476 14:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:17.476 14:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:24:17.476 14:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:17.476 14:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:17.476 14:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:17.476 14:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.476 14:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:17.476 14:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.476 14:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:17.476 [2024-07-26 14:05:44.785460] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.476 [2024-07-26 14:05:44.793574] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:17.476 null0 00:24:17.476 [2024-07-26 14:05:44.825599] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.476 14:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.476 14:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3076256 00:24:17.476 14:05:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3076256 /tmp/host.sock 00:24:17.476 14:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3076256 ']' 00:24:17.476 14:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:24:17.476 14:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:17.476 14:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:17.476 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:17.476 14:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:17.476 14:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:17.476 14:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:17.476 [2024-07-26 14:05:44.892246] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:24:17.476 [2024-07-26 14:05:44.892287] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076256 ] 00:24:17.736 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.736 [2024-07-26 14:05:44.945530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.736 [2024-07-26 14:05:45.025123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.307 14:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:18.307 14:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:24:18.307 14:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:18.307 14:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:18.307 14:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.307 14:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:18.307 14:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.307 14:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:18.307 14:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.307 14:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:18.567 14:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.567 14:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:18.567 14:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.567 14:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:19.506 [2024-07-26 14:05:46.795228] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:19.506 [2024-07-26 14:05:46.795257] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:19.506 [2024-07-26 14:05:46.795273] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:19.506 [2024-07-26 14:05:46.881534] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:19.766 [2024-07-26 14:05:47.030092] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:19.766 [2024-07-26 14:05:47.030133] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:19.766 [2024-07-26 14:05:47.030153] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:19.766 [2024-07-26 14:05:47.030165] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:19.766 [2024-07-26 14:05:47.030183] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:19.766 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.766 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:19.766 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:19.766 [2024-07-26 14:05:47.035527] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb50e60 was disconnected and freed. delete nvme_qpair. 
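The attach that just completed was driven over the host app's RPC socket; rpc_cmd resolves to spdk/scripts/rpc.py (the same script invoked by full path for the target RPCs earlier in this log), so the equivalent manual sequence is roughly:

    # host-side app started above as: nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    ./scripts/rpc.py -s /tmp/host.sock framework_start_init
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

With --wait-for-attach the RPC returns only after the discovered subsystem (nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420) has been attached as nvme0, which the wait_for_bdev nvme0n1 check then confirms.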
00:24:19.766 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.766 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:19.766 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.766 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:19.766 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:19.766 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:19.766 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.766 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:19.766 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:19.766 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:19.766 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:19.766 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:19.766 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.766 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:19.766 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.766 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:19.766 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:19.766 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:20.026 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.026 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:20.026 14:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:20.967 14:05:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:20.967 14:05:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.967 14:05:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:20.967 14:05:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:20.967 14:05:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.967 14:05:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:20.967 14:05:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.967 14:05:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.967 14:05:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:20.967 14:05:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:21.906 14:05:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:21.906 14:05:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.906 14:05:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:21.906 14:05:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.906 14:05:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:21.906 14:05:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.906 14:05:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:21.906 14:05:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.906 14:05:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:21.906 14:05:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:23.289 14:05:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:23.289 14:05:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:23.289 14:05:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:23.289 14:05:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.289 14:05:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:23.289 14:05:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.289 14:05:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:23.289 14:05:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.289 14:05:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:23.289 14:05:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:24.230 14:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:24.230 14:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:24.230 14:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:24.230 14:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.230 14:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:24.230 14:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:24.230 14:05:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:24.230 14:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.230 14:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:24.230 14:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:25.171 14:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:25.171 14:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:25.171 14:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:25.171 14:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:25.171 14:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.171 14:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:25.171 14:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.171 14:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.171 [2024-07-26 14:05:52.471524] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:25.171 [2024-07-26 14:05:52.471560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.171 [2024-07-26 14:05:52.471571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.171 [2024-07-26 14:05:52.471580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.171 [2024-07-26 14:05:52.471587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.171 [2024-07-26 14:05:52.471595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.171 [2024-07-26 14:05:52.471602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.171 [2024-07-26 14:05:52.471609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.171 [2024-07-26 14:05:52.471619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.171 [2024-07-26 14:05:52.471627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.171 [2024-07-26 14:05:52.471634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.171 [2024-07-26 14:05:52.471640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb176b0 is same with the state(5) to be set 00:24:25.171 [2024-07-26 
14:05:52.481545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb176b0 (9): Bad file descriptor 00:24:25.171 [2024-07-26 14:05:52.491583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:25.171 14:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:25.171 14:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:26.155 14:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:26.155 14:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.155 14:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:26.155 14:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.155 14:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.155 14:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:26.155 14:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:26.155 [2024-07-26 14:05:53.538079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:26.155 [2024-07-26 14:05:53.538118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb176b0 with addr=10.0.0.2, port=4420 00:24:26.155 [2024-07-26 14:05:53.538132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb176b0 is same with the state(5) to be set 00:24:26.155 [2024-07-26 14:05:53.538158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb176b0 (9): Bad file descriptor 00:24:26.155 [2024-07-26 14:05:53.538555] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:26.155 [2024-07-26 14:05:53.538580] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:26.155 [2024-07-26 14:05:53.538590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:26.155 [2024-07-26 14:05:53.538601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:26.155 [2024-07-26 14:05:53.538619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.155 [2024-07-26 14:05:53.538628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:26.155 14:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.155 14:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:26.155 14:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:27.536 [2024-07-26 14:05:54.541108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
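The connection timeouts and failed controller resets above are the intended effect of the fault injected at discovery_remove_ifc.sh steps @75/@76: the target-side address was deleted and the port taken down, so every reconnect to 10.0.0.2:4420 now fails and, once the configured 2-second ctrlr-loss timeout is exceeded, nvme0n1 is deleted. The injected fault, restated without the trace prefixes:

    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0   # drop the target address
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down              # take the target port down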
00:24:27.536 [2024-07-26 14:05:54.541129] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:27.536 [2024-07-26 14:05:54.541136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:27.536 [2024-07-26 14:05:54.541143] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:27.536 [2024-07-26 14:05:54.541157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.536 [2024-07-26 14:05:54.541174] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:27.536 [2024-07-26 14:05:54.541192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.537 [2024-07-26 14:05:54.541201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.537 [2024-07-26 14:05:54.541209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.537 [2024-07-26 14:05:54.541215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.537 [2024-07-26 14:05:54.541223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.537 [2024-07-26 14:05:54.541229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.537 [2024-07-26 14:05:54.541235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.537 [2024-07-26 14:05:54.541241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.537 [2024-07-26 14:05:54.541248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.537 [2024-07-26 14:05:54.541254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.537 [2024-07-26 14:05:54.541260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
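The repeated get_bdev_list / sleep 1 pairs in the trace are the test's polling helper waiting for the bdev list to reach an expected value (here the empty string, i.e. nvme0n1 gone). Condensed, and ignoring any iteration cap the real discovery_remove_ifc.sh helpers may have, it is roughly:

    get_bdev_list() {
        # rpc_cmd in the trace wraps scripts/rpc.py against the host app's socket
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
    }
    wait_for_bdev ''    # block until no bdev is left after the path removal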
00:24:27.537 [2024-07-26 14:05:54.541590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb16a80 (9): Bad file descriptor 00:24:27.537 [2024-07-26 14:05:54.542601] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:27.537 [2024-07-26 14:05:54.542611] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:27.537 14:05:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:28.477 14:05:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:28.477 14:05:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:28.477 14:05:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:28.477 14:05:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:28.477 14:05:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.477 14:05:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:28.477 14:05:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:28.477 14:05:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.477 14:05:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:28.477 14:05:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:29.419 [2024-07-26 14:05:56.559843] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:29.419 [2024-07-26 14:05:56.559859] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:29.419 [2024-07-26 14:05:56.559873] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:29.419 [2024-07-26 14:05:56.649152] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:29.419 14:05:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:29.419 14:05:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:29.419 14:05:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:29.419 14:05:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.419 14:05:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:29.419 14:05:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:29.419 14:05:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:29.419 14:05:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.419 [2024-07-26 14:05:56.833175] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:29.419 [2024-07-26 14:05:56.833209] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:29.419 [2024-07-26 14:05:56.833226] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:29.419 [2024-07-26 14:05:56.833239] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:29.419 [2024-07-26 14:05:56.833245] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:29.419 [2024-07-26 14:05:56.840009] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb1e180 was disconnected and freed. delete nvme_qpair. 
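Note: the trace above polls the bdev list over the SPDK JSON-RPC socket (rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs) and sleeps one second per iteration until nvme1n1 appears after the discovery controller re-attaches. A hedged reconstruction of that polling, assuming rpc_cmd ultimately issues the call via scripts/rpc.py and using a 30-iteration bound that is not in the trace:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # List all bdev names known to the host app, one space-separated line.
    get_bdev_list() {
        "$rootdir/scripts/rpc.py" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Re-poll once per second until the requested bdev shows up.
    wait_for_bdev() {
        local bdev=$1 i
        for ((i = 0; i < 30; i++)); do      # assumed bound; the trace itself just re-polls after "sleep 1"
            [[ "$(get_bdev_list)" == *"$bdev"* ]] && return 0
            sleep 1
        done
        return 1
    }

    wait_for_bdev nvme1n1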
00:24:29.419 14:05:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:29.419 14:05:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:30.800 14:05:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:30.800 14:05:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:30.800 14:05:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:30.800 14:05:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:30.800 14:05:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.800 14:05:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:30.800 14:05:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.800 14:05:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.800 14:05:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:30.800 14:05:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:30.800 14:05:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3076256 00:24:30.800 14:05:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3076256 ']' 00:24:30.800 14:05:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3076256 00:24:30.800 14:05:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:24:30.800 14:05:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:30.800 14:05:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3076256 00:24:30.800 14:05:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:30.800 14:05:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:30.800 14:05:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3076256' 00:24:30.800 killing process with pid 3076256 00:24:30.800 14:05:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3076256 00:24:30.800 14:05:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3076256 00:24:30.800 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:30.800 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:30.800 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:30.800 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:30.800 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:30.800 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # 
for i in {1..20} 00:24:30.800 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:30.800 rmmod nvme_tcp 00:24:30.800 rmmod nvme_fabrics 00:24:30.800 rmmod nvme_keyring 00:24:30.800 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:30.800 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:30.800 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:24:30.800 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3076030 ']' 00:24:30.800 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3076030 00:24:30.800 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3076030 ']' 00:24:30.800 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3076030 00:24:30.800 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:24:30.800 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:30.801 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3076030 00:24:30.801 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:30.801 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:30.801 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3076030' 00:24:30.801 killing process with pid 3076030 00:24:30.801 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3076030 00:24:30.801 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3076030 00:24:31.061 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:31.061 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:31.061 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:31.061 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:31.061 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:31.061 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.061 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.061 14:05:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.603 14:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:33.603 00:24:33.603 real 0m22.144s 00:24:33.603 user 0m28.764s 00:24:33.603 sys 0m5.384s 00:24:33.603 14:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:33.603 14:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.603 
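Note: the teardown above kills the host-side app by pid (after checking that the process still exists and is not the sudo wrapper) and then retries unloading the host NVMe modules, since nvme-tcp/nvme-fabrics can stay busy briefly after the sockets close. A sketch of that sequence under those assumptions; the sudo branch and the exact retry condition of the real helper are simplified here, and the pid and interface name are the ones from this run:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0                           # nothing to do if it is already gone
        [[ "$(ps --no-headers -o comm= "$pid")" == sudo ]] && return 1   # real helper handles sudo-wrapped apps separately
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                                  # wait only applies to children of this shell
    }

    killprocess 3076030

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
    ip -4 addr flush cvl_0_1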
************************************ 00:24:33.603 END TEST nvmf_discovery_remove_ifc 00:24:33.603 ************************************ 00:24:33.603 14:06:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:33.603 14:06:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:33.603 14:06:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:33.603 14:06:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.603 ************************************ 00:24:33.603 START TEST nvmf_identify_kernel_target 00:24:33.603 ************************************ 00:24:33.603 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:33.603 * Looking for test storage... 00:24:33.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
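Note: the common.sh setup above generates a host NQN with nvme gen-hostnqn and stores a matching host ID, and those two values are what the later nvme discover call against 10.0.0.1:4420 passes along. A minimal sketch, assuming the host ID is simply the UUID portion of the generated NQN:

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # assumption: host ID reuses the UUID part of that NQN
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    # Same transport/address/port as the discovery call later in this run.
    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.1 -s 4420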
00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:33.604 14:06:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.883 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.883 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:38.883 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:38.883 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:38.883 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:38.883 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:38.883 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:38.883 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:38.883 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:38.884 
14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:38.884 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
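Note: the classification above buckets NICs by PCI vendor/device ID (Intel E810 is 0x8086:0x159b here) and then lists the net devices sysfs exposes under each matching function. A hedged sketch of the same walk done directly against sysfs; the cvl_0_0/cvl_0_1 names are specific to this host:

    intel=0x8086 e810=0x159b
    for pci in /sys/bus/pci/devices/*; do
        # Keep only functions whose vendor/device IDs match the E810 entry above.
        [[ "$(<"$pci/vendor")" == "$intel" && "$(<"$pci/device")" == "$e810" ]] || continue
        echo "Found ${pci##*/} ($intel - $e810)"
        for net in "$pci"/net/*; do
            [[ -e "$net" ]] && echo "  net device under ${pci##*/}: ${net##*/}"   # cvl_0_0 / cvl_0_1 in this run
        done
    done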
00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:38.884 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:38.884 Found net devices under 0000:86:00.0: cvl_0_0 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:24:38.884 Found net devices under 0000:86:00.1: cvl_0_1 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:38.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:38.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:24:38.884 00:24:38.884 --- 10.0.0.2 ping statistics --- 00:24:38.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.884 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:24:38.884 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:38.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.427 ms 00:24:38.885 00:24:38.885 --- 10.0.0.1 ping statistics --- 00:24:38.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.885 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:38.885 14:06:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:41.425 Waiting for block devices as requested 00:24:41.425 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:41.425 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:41.425 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:41.425 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:41.425 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:41.425 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:41.685 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:41.685 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:41.685 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:41.685 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:41.944 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:41.944 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:41.944 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:42.204 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:42.204 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:42.204 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:42.204 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
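Note: the mkdir/echo/ln commands traced in the lines that follow build a kernel nvmet target over configfs: create the subsystem, back namespace 1 with /dev/nvme0n1, and expose TCP port 4420 on 10.0.0.1. xtrace does not show where each echo is redirected, so the attribute file names below are an assumption based on the standard nvmet configfs layout; the paths and values are the ones from this run, and the nvmet module is assumed to be loaded already (it is modprobe'd a few entries earlier):

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    ns=$subsys/namespaces/1
    port=$nvmet/ports/1

    mkdir "$subsys"
    mkdir "$ns"
    mkdir "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed destination for the traced echo
    echo 1 > "$subsys/attr_allow_any_host"                         # assumed destination
    echo /dev/nvme0n1 > "$ns/device_path"
    echo 1 > "$ns/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"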
00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:42.466 No valid GPT data, bailing 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:42.466 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:42.466 00:24:42.466 Discovery Log Number of Records 2, Generation counter 2 00:24:42.466 =====Discovery Log Entry 0====== 00:24:42.466 trtype: tcp 00:24:42.466 adrfam: ipv4 00:24:42.466 subtype: current discovery subsystem 00:24:42.466 treq: not specified, sq flow control disable supported 00:24:42.466 portid: 1 00:24:42.466 trsvcid: 4420 00:24:42.466 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:42.466 traddr: 10.0.0.1 00:24:42.466 eflags: none 00:24:42.466 sectype: none 00:24:42.466 =====Discovery Log Entry 1====== 00:24:42.466 trtype: tcp 00:24:42.466 adrfam: ipv4 00:24:42.466 subtype: nvme subsystem 00:24:42.466 treq: not specified, sq flow control disable supported 00:24:42.466 portid: 1 00:24:42.466 trsvcid: 4420 00:24:42.466 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:42.466 traddr: 10.0.0.1 00:24:42.466 eflags: none 00:24:42.466 sectype: none 00:24:42.466 14:06:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:42.466 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:42.466 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.466 ===================================================== 00:24:42.466 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:42.466 ===================================================== 00:24:42.466 Controller Capabilities/Features 00:24:42.466 ================================ 00:24:42.466 Vendor ID: 0000 00:24:42.466 Subsystem Vendor ID: 0000 00:24:42.466 Serial Number: e56fd05199ed6130aed9 00:24:42.466 Model Number: Linux 00:24:42.466 Firmware Version: 6.7.0-68 00:24:42.466 Recommended Arb Burst: 0 00:24:42.466 IEEE OUI Identifier: 00 00 00 00:24:42.466 Multi-path I/O 00:24:42.466 May have multiple subsystem ports: No 00:24:42.466 May have multiple controllers: No 00:24:42.466 Associated with SR-IOV VF: No 00:24:42.466 Max Data Transfer Size: Unlimited 00:24:42.466 Max Number of Namespaces: 0 00:24:42.466 Max Number of I/O Queues: 1024 00:24:42.466 NVMe Specification Version (VS): 1.3 00:24:42.466 NVMe Specification Version (Identify): 1.3 00:24:42.466 Maximum Queue Entries: 1024 00:24:42.466 Contiguous Queues Required: No 00:24:42.466 Arbitration Mechanisms Supported 00:24:42.466 Weighted Round Robin: Not Supported 00:24:42.466 Vendor Specific: Not Supported 00:24:42.466 Reset Timeout: 7500 ms 00:24:42.466 Doorbell Stride: 4 bytes 00:24:42.466 NVM Subsystem Reset: Not Supported 00:24:42.466 Command Sets Supported 00:24:42.466 NVM Command Set: Supported 00:24:42.466 Boot Partition: Not Supported 00:24:42.466 Memory Page Size Minimum: 4096 bytes 00:24:42.466 Memory Page Size Maximum: 4096 bytes 00:24:42.466 Persistent Memory Region: Not Supported 00:24:42.466 Optional Asynchronous Events Supported 00:24:42.466 Namespace Attribute Notices: Not Supported 00:24:42.467 Firmware Activation Notices: Not Supported 00:24:42.467 ANA Change Notices: Not Supported 00:24:42.467 PLE Aggregate Log Change Notices: Not Supported 00:24:42.467 LBA Status Info Alert Notices: Not Supported 00:24:42.467 EGE Aggregate Log Change Notices: Not Supported 00:24:42.467 Normal NVM Subsystem Shutdown event: Not Supported 00:24:42.467 Zone Descriptor Change Notices: Not Supported 00:24:42.467 Discovery Log Change Notices: Supported 00:24:42.467 Controller Attributes 00:24:42.467 128-bit Host Identifier: Not Supported 00:24:42.467 Non-Operational Permissive Mode: Not Supported 00:24:42.467 NVM Sets: Not Supported 00:24:42.467 Read Recovery Levels: Not Supported 00:24:42.467 Endurance Groups: Not Supported 00:24:42.467 Predictable Latency Mode: Not Supported 00:24:42.467 Traffic Based Keep ALive: Not Supported 00:24:42.467 Namespace Granularity: Not Supported 00:24:42.467 SQ Associations: Not Supported 00:24:42.467 UUID List: Not Supported 00:24:42.467 Multi-Domain Subsystem: Not Supported 00:24:42.467 Fixed Capacity Management: Not Supported 00:24:42.467 Variable Capacity Management: Not Supported 00:24:42.467 Delete Endurance Group: Not Supported 00:24:42.467 Delete NVM Set: Not Supported 00:24:42.467 Extended LBA Formats Supported: Not Supported 00:24:42.467 Flexible Data Placement Supported: Not Supported 00:24:42.467 00:24:42.467 Controller Memory Buffer Support 00:24:42.467 ================================ 00:24:42.467 Supported: No 
00:24:42.467 00:24:42.467 Persistent Memory Region Support 00:24:42.467 ================================ 00:24:42.467 Supported: No 00:24:42.467 00:24:42.467 Admin Command Set Attributes 00:24:42.467 ============================ 00:24:42.467 Security Send/Receive: Not Supported 00:24:42.467 Format NVM: Not Supported 00:24:42.467 Firmware Activate/Download: Not Supported 00:24:42.467 Namespace Management: Not Supported 00:24:42.467 Device Self-Test: Not Supported 00:24:42.467 Directives: Not Supported 00:24:42.467 NVMe-MI: Not Supported 00:24:42.467 Virtualization Management: Not Supported 00:24:42.467 Doorbell Buffer Config: Not Supported 00:24:42.467 Get LBA Status Capability: Not Supported 00:24:42.467 Command & Feature Lockdown Capability: Not Supported 00:24:42.467 Abort Command Limit: 1 00:24:42.467 Async Event Request Limit: 1 00:24:42.467 Number of Firmware Slots: N/A 00:24:42.467 Firmware Slot 1 Read-Only: N/A 00:24:42.467 Firmware Activation Without Reset: N/A 00:24:42.467 Multiple Update Detection Support: N/A 00:24:42.467 Firmware Update Granularity: No Information Provided 00:24:42.467 Per-Namespace SMART Log: No 00:24:42.467 Asymmetric Namespace Access Log Page: Not Supported 00:24:42.467 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:42.467 Command Effects Log Page: Not Supported 00:24:42.467 Get Log Page Extended Data: Supported 00:24:42.467 Telemetry Log Pages: Not Supported 00:24:42.467 Persistent Event Log Pages: Not Supported 00:24:42.467 Supported Log Pages Log Page: May Support 00:24:42.467 Commands Supported & Effects Log Page: Not Supported 00:24:42.467 Feature Identifiers & Effects Log Page:May Support 00:24:42.467 NVMe-MI Commands & Effects Log Page: May Support 00:24:42.467 Data Area 4 for Telemetry Log: Not Supported 00:24:42.467 Error Log Page Entries Supported: 1 00:24:42.467 Keep Alive: Not Supported 00:24:42.467 00:24:42.467 NVM Command Set Attributes 00:24:42.467 ========================== 00:24:42.467 Submission Queue Entry Size 00:24:42.467 Max: 1 00:24:42.467 Min: 1 00:24:42.467 Completion Queue Entry Size 00:24:42.467 Max: 1 00:24:42.467 Min: 1 00:24:42.467 Number of Namespaces: 0 00:24:42.467 Compare Command: Not Supported 00:24:42.467 Write Uncorrectable Command: Not Supported 00:24:42.467 Dataset Management Command: Not Supported 00:24:42.467 Write Zeroes Command: Not Supported 00:24:42.467 Set Features Save Field: Not Supported 00:24:42.467 Reservations: Not Supported 00:24:42.467 Timestamp: Not Supported 00:24:42.467 Copy: Not Supported 00:24:42.467 Volatile Write Cache: Not Present 00:24:42.467 Atomic Write Unit (Normal): 1 00:24:42.467 Atomic Write Unit (PFail): 1 00:24:42.467 Atomic Compare & Write Unit: 1 00:24:42.467 Fused Compare & Write: Not Supported 00:24:42.467 Scatter-Gather List 00:24:42.467 SGL Command Set: Supported 00:24:42.467 SGL Keyed: Not Supported 00:24:42.467 SGL Bit Bucket Descriptor: Not Supported 00:24:42.467 SGL Metadata Pointer: Not Supported 00:24:42.467 Oversized SGL: Not Supported 00:24:42.467 SGL Metadata Address: Not Supported 00:24:42.467 SGL Offset: Supported 00:24:42.467 Transport SGL Data Block: Not Supported 00:24:42.467 Replay Protected Memory Block: Not Supported 00:24:42.467 00:24:42.467 Firmware Slot Information 00:24:42.467 ========================= 00:24:42.467 Active slot: 0 00:24:42.467 00:24:42.467 00:24:42.467 Error Log 00:24:42.467 ========= 00:24:42.467 00:24:42.467 Active Namespaces 00:24:42.467 ================= 00:24:42.467 Discovery Log Page 00:24:42.467 ================== 00:24:42.467 
Generation Counter: 2 00:24:42.467 Number of Records: 2 00:24:42.467 Record Format: 0 00:24:42.467 00:24:42.467 Discovery Log Entry 0 00:24:42.467 ---------------------- 00:24:42.467 Transport Type: 3 (TCP) 00:24:42.467 Address Family: 1 (IPv4) 00:24:42.467 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:42.467 Entry Flags: 00:24:42.467 Duplicate Returned Information: 0 00:24:42.467 Explicit Persistent Connection Support for Discovery: 0 00:24:42.467 Transport Requirements: 00:24:42.467 Secure Channel: Not Specified 00:24:42.467 Port ID: 1 (0x0001) 00:24:42.467 Controller ID: 65535 (0xffff) 00:24:42.467 Admin Max SQ Size: 32 00:24:42.467 Transport Service Identifier: 4420 00:24:42.467 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:42.467 Transport Address: 10.0.0.1 00:24:42.467 Discovery Log Entry 1 00:24:42.467 ---------------------- 00:24:42.467 Transport Type: 3 (TCP) 00:24:42.467 Address Family: 1 (IPv4) 00:24:42.467 Subsystem Type: 2 (NVM Subsystem) 00:24:42.467 Entry Flags: 00:24:42.467 Duplicate Returned Information: 0 00:24:42.467 Explicit Persistent Connection Support for Discovery: 0 00:24:42.467 Transport Requirements: 00:24:42.467 Secure Channel: Not Specified 00:24:42.467 Port ID: 1 (0x0001) 00:24:42.467 Controller ID: 65535 (0xffff) 00:24:42.467 Admin Max SQ Size: 32 00:24:42.467 Transport Service Identifier: 4420 00:24:42.467 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:42.467 Transport Address: 10.0.0.1 00:24:42.467 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:42.467 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.467 get_feature(0x01) failed 00:24:42.467 get_feature(0x02) failed 00:24:42.467 get_feature(0x04) failed 00:24:42.467 ===================================================== 00:24:42.468 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:42.468 ===================================================== 00:24:42.468 Controller Capabilities/Features 00:24:42.468 ================================ 00:24:42.468 Vendor ID: 0000 00:24:42.468 Subsystem Vendor ID: 0000 00:24:42.468 Serial Number: 67143b7a3804dc62f6ad 00:24:42.468 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:42.468 Firmware Version: 6.7.0-68 00:24:42.468 Recommended Arb Burst: 6 00:24:42.468 IEEE OUI Identifier: 00 00 00 00:24:42.468 Multi-path I/O 00:24:42.468 May have multiple subsystem ports: Yes 00:24:42.468 May have multiple controllers: Yes 00:24:42.468 Associated with SR-IOV VF: No 00:24:42.468 Max Data Transfer Size: Unlimited 00:24:42.468 Max Number of Namespaces: 1024 00:24:42.468 Max Number of I/O Queues: 128 00:24:42.468 NVMe Specification Version (VS): 1.3 00:24:42.468 NVMe Specification Version (Identify): 1.3 00:24:42.468 Maximum Queue Entries: 1024 00:24:42.468 Contiguous Queues Required: No 00:24:42.468 Arbitration Mechanisms Supported 00:24:42.468 Weighted Round Robin: Not Supported 00:24:42.468 Vendor Specific: Not Supported 00:24:42.468 Reset Timeout: 7500 ms 00:24:42.468 Doorbell Stride: 4 bytes 00:24:42.468 NVM Subsystem Reset: Not Supported 00:24:42.468 Command Sets Supported 00:24:42.468 NVM Command Set: Supported 00:24:42.468 Boot Partition: Not Supported 00:24:42.468 Memory Page Size Minimum: 4096 bytes 00:24:42.468 Memory Page Size Maximum: 4096 bytes 00:24:42.468 
Persistent Memory Region: Not Supported 00:24:42.468 Optional Asynchronous Events Supported 00:24:42.468 Namespace Attribute Notices: Supported 00:24:42.468 Firmware Activation Notices: Not Supported 00:24:42.468 ANA Change Notices: Supported 00:24:42.468 PLE Aggregate Log Change Notices: Not Supported 00:24:42.468 LBA Status Info Alert Notices: Not Supported 00:24:42.468 EGE Aggregate Log Change Notices: Not Supported 00:24:42.468 Normal NVM Subsystem Shutdown event: Not Supported 00:24:42.468 Zone Descriptor Change Notices: Not Supported 00:24:42.468 Discovery Log Change Notices: Not Supported 00:24:42.468 Controller Attributes 00:24:42.468 128-bit Host Identifier: Supported 00:24:42.468 Non-Operational Permissive Mode: Not Supported 00:24:42.468 NVM Sets: Not Supported 00:24:42.468 Read Recovery Levels: Not Supported 00:24:42.468 Endurance Groups: Not Supported 00:24:42.468 Predictable Latency Mode: Not Supported 00:24:42.468 Traffic Based Keep ALive: Supported 00:24:42.468 Namespace Granularity: Not Supported 00:24:42.468 SQ Associations: Not Supported 00:24:42.468 UUID List: Not Supported 00:24:42.468 Multi-Domain Subsystem: Not Supported 00:24:42.468 Fixed Capacity Management: Not Supported 00:24:42.468 Variable Capacity Management: Not Supported 00:24:42.468 Delete Endurance Group: Not Supported 00:24:42.468 Delete NVM Set: Not Supported 00:24:42.468 Extended LBA Formats Supported: Not Supported 00:24:42.468 Flexible Data Placement Supported: Not Supported 00:24:42.468 00:24:42.468 Controller Memory Buffer Support 00:24:42.468 ================================ 00:24:42.468 Supported: No 00:24:42.468 00:24:42.468 Persistent Memory Region Support 00:24:42.468 ================================ 00:24:42.468 Supported: No 00:24:42.468 00:24:42.468 Admin Command Set Attributes 00:24:42.468 ============================ 00:24:42.468 Security Send/Receive: Not Supported 00:24:42.468 Format NVM: Not Supported 00:24:42.468 Firmware Activate/Download: Not Supported 00:24:42.468 Namespace Management: Not Supported 00:24:42.468 Device Self-Test: Not Supported 00:24:42.468 Directives: Not Supported 00:24:42.468 NVMe-MI: Not Supported 00:24:42.468 Virtualization Management: Not Supported 00:24:42.468 Doorbell Buffer Config: Not Supported 00:24:42.468 Get LBA Status Capability: Not Supported 00:24:42.468 Command & Feature Lockdown Capability: Not Supported 00:24:42.468 Abort Command Limit: 4 00:24:42.468 Async Event Request Limit: 4 00:24:42.468 Number of Firmware Slots: N/A 00:24:42.468 Firmware Slot 1 Read-Only: N/A 00:24:42.468 Firmware Activation Without Reset: N/A 00:24:42.468 Multiple Update Detection Support: N/A 00:24:42.468 Firmware Update Granularity: No Information Provided 00:24:42.468 Per-Namespace SMART Log: Yes 00:24:42.468 Asymmetric Namespace Access Log Page: Supported 00:24:42.468 ANA Transition Time : 10 sec 00:24:42.468 00:24:42.468 Asymmetric Namespace Access Capabilities 00:24:42.468 ANA Optimized State : Supported 00:24:42.468 ANA Non-Optimized State : Supported 00:24:42.468 ANA Inaccessible State : Supported 00:24:42.468 ANA Persistent Loss State : Supported 00:24:42.468 ANA Change State : Supported 00:24:42.468 ANAGRPID is not changed : No 00:24:42.468 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:42.468 00:24:42.468 ANA Group Identifier Maximum : 128 00:24:42.468 Number of ANA Group Identifiers : 128 00:24:42.468 Max Number of Allowed Namespaces : 1024 00:24:42.468 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:42.468 Command Effects Log Page: Supported 
00:24:42.468 Get Log Page Extended Data: Supported 00:24:42.468 Telemetry Log Pages: Not Supported 00:24:42.468 Persistent Event Log Pages: Not Supported 00:24:42.468 Supported Log Pages Log Page: May Support 00:24:42.468 Commands Supported & Effects Log Page: Not Supported 00:24:42.468 Feature Identifiers & Effects Log Page:May Support 00:24:42.468 NVMe-MI Commands & Effects Log Page: May Support 00:24:42.468 Data Area 4 for Telemetry Log: Not Supported 00:24:42.468 Error Log Page Entries Supported: 128 00:24:42.468 Keep Alive: Supported 00:24:42.468 Keep Alive Granularity: 1000 ms 00:24:42.468 00:24:42.468 NVM Command Set Attributes 00:24:42.468 ========================== 00:24:42.468 Submission Queue Entry Size 00:24:42.468 Max: 64 00:24:42.468 Min: 64 00:24:42.468 Completion Queue Entry Size 00:24:42.468 Max: 16 00:24:42.468 Min: 16 00:24:42.468 Number of Namespaces: 1024 00:24:42.468 Compare Command: Not Supported 00:24:42.468 Write Uncorrectable Command: Not Supported 00:24:42.468 Dataset Management Command: Supported 00:24:42.468 Write Zeroes Command: Supported 00:24:42.468 Set Features Save Field: Not Supported 00:24:42.468 Reservations: Not Supported 00:24:42.468 Timestamp: Not Supported 00:24:42.468 Copy: Not Supported 00:24:42.468 Volatile Write Cache: Present 00:24:42.468 Atomic Write Unit (Normal): 1 00:24:42.468 Atomic Write Unit (PFail): 1 00:24:42.468 Atomic Compare & Write Unit: 1 00:24:42.468 Fused Compare & Write: Not Supported 00:24:42.468 Scatter-Gather List 00:24:42.468 SGL Command Set: Supported 00:24:42.468 SGL Keyed: Not Supported 00:24:42.468 SGL Bit Bucket Descriptor: Not Supported 00:24:42.468 SGL Metadata Pointer: Not Supported 00:24:42.468 Oversized SGL: Not Supported 00:24:42.468 SGL Metadata Address: Not Supported 00:24:42.468 SGL Offset: Supported 00:24:42.468 Transport SGL Data Block: Not Supported 00:24:42.468 Replay Protected Memory Block: Not Supported 00:24:42.468 00:24:42.468 Firmware Slot Information 00:24:42.468 ========================= 00:24:42.468 Active slot: 0 00:24:42.468 00:24:42.468 Asymmetric Namespace Access 00:24:42.468 =========================== 00:24:42.468 Change Count : 0 00:24:42.468 Number of ANA Group Descriptors : 1 00:24:42.468 ANA Group Descriptor : 0 00:24:42.468 ANA Group ID : 1 00:24:42.468 Number of NSID Values : 1 00:24:42.468 Change Count : 0 00:24:42.468 ANA State : 1 00:24:42.468 Namespace Identifier : 1 00:24:42.468 00:24:42.468 Commands Supported and Effects 00:24:42.468 ============================== 00:24:42.468 Admin Commands 00:24:42.468 -------------- 00:24:42.468 Get Log Page (02h): Supported 00:24:42.468 Identify (06h): Supported 00:24:42.468 Abort (08h): Supported 00:24:42.468 Set Features (09h): Supported 00:24:42.468 Get Features (0Ah): Supported 00:24:42.468 Asynchronous Event Request (0Ch): Supported 00:24:42.468 Keep Alive (18h): Supported 00:24:42.468 I/O Commands 00:24:42.468 ------------ 00:24:42.468 Flush (00h): Supported 00:24:42.468 Write (01h): Supported LBA-Change 00:24:42.468 Read (02h): Supported 00:24:42.468 Write Zeroes (08h): Supported LBA-Change 00:24:42.468 Dataset Management (09h): Supported 00:24:42.468 00:24:42.468 Error Log 00:24:42.468 ========= 00:24:42.468 Entry: 0 00:24:42.468 Error Count: 0x3 00:24:42.468 Submission Queue Id: 0x0 00:24:42.468 Command Id: 0x5 00:24:42.468 Phase Bit: 0 00:24:42.468 Status Code: 0x2 00:24:42.468 Status Code Type: 0x0 00:24:42.468 Do Not Retry: 1 00:24:42.469 Error Location: 0x28 00:24:42.469 LBA: 0x0 00:24:42.469 Namespace: 0x0 00:24:42.469 Vendor Log 
Page: 0x0 00:24:42.469 ----------- 00:24:42.469 Entry: 1 00:24:42.469 Error Count: 0x2 00:24:42.469 Submission Queue Id: 0x0 00:24:42.469 Command Id: 0x5 00:24:42.469 Phase Bit: 0 00:24:42.469 Status Code: 0x2 00:24:42.469 Status Code Type: 0x0 00:24:42.469 Do Not Retry: 1 00:24:42.469 Error Location: 0x28 00:24:42.469 LBA: 0x0 00:24:42.469 Namespace: 0x0 00:24:42.469 Vendor Log Page: 0x0 00:24:42.469 ----------- 00:24:42.469 Entry: 2 00:24:42.469 Error Count: 0x1 00:24:42.469 Submission Queue Id: 0x0 00:24:42.469 Command Id: 0x4 00:24:42.469 Phase Bit: 0 00:24:42.469 Status Code: 0x2 00:24:42.469 Status Code Type: 0x0 00:24:42.469 Do Not Retry: 1 00:24:42.469 Error Location: 0x28 00:24:42.469 LBA: 0x0 00:24:42.469 Namespace: 0x0 00:24:42.469 Vendor Log Page: 0x0 00:24:42.469 00:24:42.469 Number of Queues 00:24:42.469 ================ 00:24:42.469 Number of I/O Submission Queues: 128 00:24:42.469 Number of I/O Completion Queues: 128 00:24:42.469 00:24:42.469 ZNS Specific Controller Data 00:24:42.469 ============================ 00:24:42.469 Zone Append Size Limit: 0 00:24:42.469 00:24:42.469 00:24:42.469 Active Namespaces 00:24:42.469 ================= 00:24:42.469 get_feature(0x05) failed 00:24:42.469 Namespace ID:1 00:24:42.469 Command Set Identifier: NVM (00h) 00:24:42.469 Deallocate: Supported 00:24:42.469 Deallocated/Unwritten Error: Not Supported 00:24:42.469 Deallocated Read Value: Unknown 00:24:42.469 Deallocate in Write Zeroes: Not Supported 00:24:42.469 Deallocated Guard Field: 0xFFFF 00:24:42.469 Flush: Supported 00:24:42.469 Reservation: Not Supported 00:24:42.469 Namespace Sharing Capabilities: Multiple Controllers 00:24:42.469 Size (in LBAs): 1953525168 (931GiB) 00:24:42.469 Capacity (in LBAs): 1953525168 (931GiB) 00:24:42.469 Utilization (in LBAs): 1953525168 (931GiB) 00:24:42.469 UUID: 1ca9743e-8669-4cbc-8a2b-df2843f39d12 00:24:42.469 Thin Provisioning: Not Supported 00:24:42.469 Per-NS Atomic Units: Yes 00:24:42.469 Atomic Boundary Size (Normal): 0 00:24:42.469 Atomic Boundary Size (PFail): 0 00:24:42.469 Atomic Boundary Offset: 0 00:24:42.469 NGUID/EUI64 Never Reused: No 00:24:42.469 ANA group ID: 1 00:24:42.469 Namespace Write Protected: No 00:24:42.469 Number of LBA Formats: 1 00:24:42.469 Current LBA Format: LBA Format #00 00:24:42.469 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:42.469 00:24:42.469 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:42.469 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:42.469 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:24:42.469 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:42.469 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:24:42.469 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:42.469 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:42.730 rmmod nvme_tcp 00:24:42.730 rmmod nvme_fabrics 00:24:42.730 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:42.730 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:24:42.730 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:24:42.730 14:06:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:42.730 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:42.730 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:42.730 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:42.730 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:42.730 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:42.730 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.730 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:42.730 14:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.641 14:06:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:44.641 14:06:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:44.641 14:06:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:44.641 14:06:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:24:44.641 14:06:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:44.641 14:06:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:44.641 14:06:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:44.641 14:06:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:44.641 14:06:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:44.641 14:06:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:44.641 14:06:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:47.215 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:47.215 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:47.215 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:47.215 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:47.215 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:47.215 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:47.215 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:47.215 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:47.215 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:47.215 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:47.215 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:47.215 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:47.215 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:47.215 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:47.215 0000:80:04.1 (8086 2021): ioatdma -> 
vfio-pci 00:24:47.215 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:48.156 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:24:48.156 00:24:48.156 real 0m15.031s 00:24:48.156 user 0m3.585s 00:24:48.156 sys 0m7.801s 00:24:48.156 14:06:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:48.156 14:06:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:48.156 ************************************ 00:24:48.156 END TEST nvmf_identify_kernel_target 00:24:48.156 ************************************ 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.417 ************************************ 00:24:48.417 START TEST nvmf_auth_host 00:24:48.417 ************************************ 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:48.417 * Looking for test storage... 00:24:48.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
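The trace above shows host/auth.sh sourcing test/nvmf/common.sh, which fixes the TCP listener ports and derives the host identity from `nvme gen-hostnqn`. A minimal sketch of that setup, using the values visible in the trace (how NVME_HOSTID is extracted from the NQN is an assumption, not shown in the log):

    # Sketch: derive the NVMe-oF host identity as nvmf/common.sh does above.
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumption: keep only the UUID portion for --hostid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

Later `nvme connect` calls can then reuse the two flags in NVME_HOST so the kernel initiator presents a consistent host identity to the target.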
00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:48.417 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.418 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:48.418 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:48.418 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:48.418 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.418 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.418 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.418 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:48.418 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:48.418 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:48.418 14:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:55.000 14:06:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:55.000 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
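The loop traced above (gather_supported_nvmf_pci_devs) builds a whitelist of NIC device IDs (Intel E810/X722 and several Mellanox parts) and then resolves each matching PCI function to its kernel net device through sysfs. A simplified, hedged version of that sysfs walk, using the two E810 functions reported in the trace; the operstate read stands in for the script's internal "up" test and is an assumption:

    # Sketch: map a PCI function to its net device(s), mirroring
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in the trace.
    for pci in 0000:86:00.0 0000:86:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue                  # skip functions with no bound net device
            dev=${netdir##*/}
            state=$(cat "$netdir/operstate" 2>/dev/null)
            echo "Found net devices under $pci: $dev ($state)"
        done
    done

On this machine the two functions resolve to cvl_0_0 and cvl_0_1, which the rest of the run uses as the target-side and initiator-side interfaces respectively.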
00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.000 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:55.001 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:55.001 Found net devices under 0000:86:00.0: cvl_0_0 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:55.001 Found net devices under 0000:86:00.1: cvl_0_1 00:24:55.001 14:06:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:55.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:24:55.001 00:24:55.001 --- 10.0.0.2 ping statistics --- 00:24:55.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.001 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:55.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:55.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:24:55.001 00:24:55.001 --- 10.0.0.1 ping statistics --- 00:24:55.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.001 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3088761 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3088761 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3088761 ']' 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
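nvmf_tcp_init, traced above, isolates the target-side port in its own network namespace so that the initiator (10.0.0.1 on cvl_0_1) and the SPDK target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) talk over real hardware within one host; the two pings confirm both directions before the target application starts. A condensed replay of those commands as they appear in the trace (root privileges assumed, error handling omitted):

    # Sketch: namespace plumbing for the TCP tests, mirroring the ip/iptables calls above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port out of the default netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP 4420 on the initiator-side interface
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The nvmf_tgt process launched right after this runs under `ip netns exec cvl_0_0_ns_spdk`, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace.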
00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:55.001 14:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.001 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:55.001 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:24:55.001 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:55.001 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:55.001 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.001 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.001 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:55.001 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:55.001 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:55.001 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:55.001 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:55.001 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:55.001 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:55.001 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:55.001 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d3ba4a2dcaae7bf3f24c42fd57264cbf 00:24:55.002 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:55.002 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.jfO 00:24:55.002 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d3ba4a2dcaae7bf3f24c42fd57264cbf 0 00:24:55.002 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d3ba4a2dcaae7bf3f24c42fd57264cbf 0 00:24:55.002 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:55.002 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:55.002 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d3ba4a2dcaae7bf3f24c42fd57264cbf 00:24:55.002 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:55.002 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.jfO 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.jfO 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.jfO 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:55.265 14:06:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e88df07746b5baf6a94923901c0b601db6eee6c450ad47d63ca057d1506a924e 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Jpe 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e88df07746b5baf6a94923901c0b601db6eee6c450ad47d63ca057d1506a924e 3 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e88df07746b5baf6a94923901c0b601db6eee6c450ad47d63ca057d1506a924e 3 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e88df07746b5baf6a94923901c0b601db6eee6c450ad47d63ca057d1506a924e 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Jpe 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Jpe 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Jpe 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ecb85b0a43890f9ab9fc98225346204a92bbc725b8099d0b 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.lXL 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ecb85b0a43890f9ab9fc98225346204a92bbc725b8099d0b 0 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ecb85b0a43890f9ab9fc98225346204a92bbc725b8099d0b 0 
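gen_dhchap_key, whose first invocations are traced above, reads the requested number of random hex characters with xxd, maps the digest name to an index (null=0, sha256=1, sha384=2, sha512=3), hands the material to a small python helper that wraps it into the DHHC-1 secret form, and stores the result in a mode-0600 temp file. A rough stand-in for the visible shell steps only; the python wrapping itself is not shown in the log, so this sketch stops at the raw hex and marks that step as a placeholder:

    # Sketch: key-material generation as in gen_dhchap_key above (null digest, 32 hex chars).
    len=32
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # 16 random bytes -> 32 hex characters
    file=$(mktemp -t spdk.key-null.XXX)              # e.g. /tmp/spdk.key-null.jfO
    # The traced "python -" step converts $key into the DHHC-1 on-disk form before writing;
    # that helper lives in nvmf/common.sh and is not reproduced here (raw hex used as a placeholder).
    printf '%s\n' "$key" > "$file"
    chmod 0600 "$file"
    echo "$file"

The test builds five such host keys (keys[0..4]) plus controller keys for most of them (ckeys[]) and, as the trace continues below, registers each file with the running target via `rpc_cmd keyring_file_add_key`.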
00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ecb85b0a43890f9ab9fc98225346204a92bbc725b8099d0b 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.lXL 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.lXL 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.lXL 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9bbc57582e051b617bf1c2668f536288a98d82f960bb01c8 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.SyQ 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9bbc57582e051b617bf1c2668f536288a98d82f960bb01c8 2 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9bbc57582e051b617bf1c2668f536288a98d82f960bb01c8 2 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9bbc57582e051b617bf1c2668f536288a98d82f960bb01c8 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.SyQ 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.SyQ 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.SyQ 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:55.265 14:06:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=423734b3c0b61be0b290a364b11c1155 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Ua0 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 423734b3c0b61be0b290a364b11c1155 1 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 423734b3c0b61be0b290a364b11c1155 1 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=423734b3c0b61be0b290a364b11c1155 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:55.265 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Ua0 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Ua0 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Ua0 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=55f5ead066734bb08a0f60d080f77d7e 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.16g 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 55f5ead066734bb08a0f60d080f77d7e 1 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 55f5ead066734bb08a0f60d080f77d7e 1 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=55f5ead066734bb08a0f60d080f77d7e 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.16g 00:24:55.526 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.16g 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.16g 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7e3c89d9d32f6d13f7d0ce0ca9f12a434dfbec186d066949 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.xLd 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7e3c89d9d32f6d13f7d0ce0ca9f12a434dfbec186d066949 2 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7e3c89d9d32f6d13f7d0ce0ca9f12a434dfbec186d066949 2 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7e3c89d9d32f6d13f7d0ce0ca9f12a434dfbec186d066949 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.xLd 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.xLd 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.xLd 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:55.527 14:06:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9f2eb4e39b885e96ec0accc29b0441b5 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.2xT 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9f2eb4e39b885e96ec0accc29b0441b5 0 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9f2eb4e39b885e96ec0accc29b0441b5 0 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9f2eb4e39b885e96ec0accc29b0441b5 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.2xT 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.2xT 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.2xT 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=861edad97728878ed6005cef60d9ad112a0d1696bda5ac975e400e877b3f11fa 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.XVW 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 861edad97728878ed6005cef60d9ad112a0d1696bda5ac975e400e877b3f11fa 3 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 861edad97728878ed6005cef60d9ad112a0d1696bda5ac975e400e877b3f11fa 3 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=861edad97728878ed6005cef60d9ad112a0d1696bda5ac975e400e877b3f11fa 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.XVW 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.XVW 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.XVW 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3088761 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3088761 ']' 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:55.527 14:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jfO 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Jpe ]] 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Jpe 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.lXL 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.SyQ ]] 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.SyQ 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Ua0 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.787 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.16g ]] 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.16g 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.xLd 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.2xT ]] 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.2xT 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.XVW 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.788 14:06:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:55.788 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:56.047 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:56.047 14:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:58.587 Waiting for block devices as requested 00:24:58.587 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:58.587 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:58.847 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:58.847 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:58.847 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:58.847 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:59.107 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:59.107 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:59.107 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:59.107 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:59.366 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:59.366 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:59.366 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:59.366 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:59.628 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:59.628 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:59.628 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:00.198 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:00.198 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:00.198 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:00.198 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:00.198 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:00.198 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:00.198 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:00.198 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:00.198 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:00.458 No valid GPT data, bailing 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:00.458 14:06:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:00.458 00:25:00.458 Discovery Log Number of Records 2, Generation counter 2 00:25:00.458 =====Discovery Log Entry 0====== 00:25:00.458 trtype: tcp 00:25:00.458 adrfam: ipv4 00:25:00.458 subtype: current discovery subsystem 00:25:00.458 treq: not specified, sq flow control disable supported 00:25:00.458 portid: 1 00:25:00.458 trsvcid: 4420 00:25:00.458 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:00.458 traddr: 10.0.0.1 00:25:00.458 eflags: none 00:25:00.458 sectype: none 00:25:00.458 =====Discovery Log Entry 1====== 00:25:00.458 trtype: tcp 00:25:00.458 adrfam: ipv4 00:25:00.458 subtype: nvme subsystem 00:25:00.458 treq: not specified, sq flow control disable supported 00:25:00.458 portid: 1 00:25:00.458 trsvcid: 4420 00:25:00.458 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:00.458 traddr: 10.0.0.1 00:25:00.458 eflags: none 00:25:00.458 sectype: none 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: ]] 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:00.458 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.459 nvme0n1 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.459 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: ]] 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
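The block of mkdir/echo/ln commands traced just above (configure_kernel_target followed by nvmet_auth_init and nvmet_auth_set_key) builds a kernel NVMe-oF/TCP target with per-host DH-HMAC-CHAP secrets through configfs. Below is a minimal stand-alone sketch of that sequence. Note that bash xtrace does not print redirections, so the configfs attribute names the echoes land in (attr_model, attr_allow_any_host, device_path, enable, addr_*, dhchap_*) are assumptions based on the stock nvmet layout, and the DHHC-1 strings are placeholders, not the secrets from this run.

#!/usr/bin/env bash
# Sketch only: rebuild the kernel target the trace above configures via configfs.
set -euo pipefail

nvmet=/sys/kernel/config/nvmet
subnqn=nqn.2024-02.io.spdk:cnode0
hostnqn=nqn.2024-02.io.spdk:host0
subsys=$nvmet/subsystems/$subnqn
ns=$subsys/namespaces/1
port=$nvmet/ports/1

modprobe nvmet
modprobe nvmet-tcp   # tcp transport; assumed to be loaded somewhere by the harness

# Subsystem with one namespace backed by the local NVMe disk found earlier.
mkdir "$subsys" "$ns" "$port"
echo "SPDK-$subnqn" > "$subsys/attr_model"           # assumed attribute
echo 0              > "$subsys/attr_allow_any_host"  # only allowed_hosts may connect
echo /dev/nvme0n1   > "$ns/device_path"
echo 1              > "$ns/enable"

# TCP listener on 10.0.0.1:4420, then expose the subsystem through it.
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

# Per-host DH-HMAC-CHAP material (what nvmet_auth_set_key drives in the test).
mkdir "$nvmet/hosts/$hostnqn"
ln -s "$nvmet/hosts/$hostnqn" "$subsys/allowed_hosts/"
echo 'hmac(sha256)'  > "$nvmet/hosts/$hostnqn/dhchap_hash"      # assumed attribute
echo ffdhe2048       > "$nvmet/hosts/$hostnqn/dhchap_dhgroup"   # assumed attribute
echo 'DHHC-1:00:...' > "$nvmet/hosts/$hostnqn/dhchap_key"       # placeholder host secret (key1)
echo 'DHHC-1:02:...' > "$nvmet/hosts/$hostnqn/dhchap_ctrl_key"  # placeholder controller secret (ckey1)

The `nvme discover` output earlier in the trace (two discovery log entries, both on 10.0.0.1:4420) is how the test confirms this target is reachable before any authenticated connect is attempted.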
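On the initiator side, each connect_authenticate pass in the trace reduces to a handful of rpc.py calls that appear verbatim in the log: register the generated key files with the SPDK keyring, pin the digest/dhgroup set, attach with --dhchap-key/--dhchap-ctrlr-key, and confirm the controller exists before detaching. A condensed sketch of one such pass follows; the rpc.py path is the workspace checkout used in this job, and the /tmp key file names are the per-run temporaries from this log.

#!/usr/bin/env bash
# Sketch only: one authenticated attach/verify/detach cycle as traced above.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Key files hold DHHC-1 secrets, e.g. "DHHC-1:00:<base64 secret>:".
$rpc keyring_file_add_key key1  /tmp/spdk.key-null.lXL    # host key
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SyQ  # controller key (bidirectional auth)

# Restrict what the initiator may negotiate for this pass.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Attach to the kernel target over TCP, authenticating with key1/ckey1.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# A visible "nvme0" controller is the test's proof that authentication succeeded;
# it is then torn down before the next digest/dhgroup/key combination.
[[ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
$rpc bdev_nvme_detach_controller nvme0

The outer loops in host/auth.sh (visible below as the repeated nvmet_auth_set_key / connect_authenticate pairs) iterate this cycle over every digest (sha256, sha384, sha512), every dhgroup (ffdhe2048 through ffdhe8192) and every key index 0-4, rewriting the target-side secrets before each attempt.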
00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.720 14:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.720 nvme0n1 00:25:00.720 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.720 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.720 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.720 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.720 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.720 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.720 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.720 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.720 14:06:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.720 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.720 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.720 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.720 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:00.720 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.720 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:00.720 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:00.720 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:00.720 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:00.720 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:00.720 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: ]] 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.721 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.981 nvme0n1 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: ]] 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.981 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.982 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.982 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.982 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.982 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.982 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.982 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.982 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.982 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.982 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.982 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.982 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.982 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:00.982 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.982 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.242 nvme0n1 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: ]] 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.242 nvme0n1 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.242 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.503 nvme0n1 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.503 14:06:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: ]] 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.503 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.504 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.504 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.504 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.504 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.504 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.504 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.504 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.504 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:01.504 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.504 14:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.764 nvme0n1 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: ]] 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.764 
14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.764 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.025 nvme0n1 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: ]] 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.025 14:06:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.025 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.026 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.026 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.026 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:02.026 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.026 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.286 nvme0n1 00:25:02.286 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.286 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.286 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.286 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.286 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.286 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.286 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.286 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.286 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.286 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:02.286 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.286 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.286 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: ]] 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.287 14:06:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.287 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.547 nvme0n1 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:02.547 14:06:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.547 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.548 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.548 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.548 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.548 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.548 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.548 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:02.548 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.548 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.808 nvme0n1 00:25:02.808 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.808 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.808 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.808 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.808 14:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: ]] 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.808 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.068 nvme0n1 00:25:03.068 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.068 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.068 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.068 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.068 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.068 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.068 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.068 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.068 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.068 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.068 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.068 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.068 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:03.068 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.068 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.068 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:03.068 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:03.068 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:03.068 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:03.068 14:06:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.068 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: ]] 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.069 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.329 nvme0n1 00:25:03.329 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:25:03.329 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.329 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.329 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.329 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.329 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.329 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.329 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.329 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.329 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.329 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.329 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.329 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:03.329 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.329 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.329 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:03.329 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:03.329 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: ]] 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
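
The nvmet_auth_set_key calls traced above only show values being echoed: the digest as 'hmac(sha256)', the DH group, and the DH-HMAC-CHAP secret plus its controller counterpart. On a Linux kernel nvmet target those values are normally written into per-host configfs attributes; the sketch below is a hedged reconstruction that assumes the usual nvmet configfs layout, since the destination paths are not visible in this trace. The host NQN matches the one used by the attach RPCs in this run.

  # Hedged sketch of what a nvmet_auth_set_key helper typically does (paths assumed).
  HOSTNQN=nqn.2024-02.io.spdk:host0                       # host NQN used throughout this test
  HOSTDIR=/sys/kernel/config/nvmet/hosts/$HOSTNQN         # assumed configfs location
  echo 'hmac(sha256)' > "$HOSTDIR/dhchap_hash"            # digest selected for this pass
  echo 'ffdhe4096'    > "$HOSTDIR/dhchap_dhgroup"         # DH group selected for this pass
  echo "$key"         > "$HOSTDIR/dhchap_key"             # DHHC-1:... host secret ($key in the trace)
  [[ -n $ckey ]] && echo "$ckey" > "$HOSTDIR/dhchap_ctrl_key"  # controller secret, bidirectional auth only

When the controller secret is empty (keyid 4 in this run), only unidirectional authentication is configured, which matches the [[ -z '' ]] check at host/auth.sh@51 in the trace.
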
00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.330 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.590 nvme0n1 00:25:03.590 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.590 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.590 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.590 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.590 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.590 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.590 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.590 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.590 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.590 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.590 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.590 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.590 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:03.590 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.591 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.591 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:25:03.591 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:03.591 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:03.591 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:03.591 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.591 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:03.591 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:03.591 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: ]] 00:25:03.591 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:03.591 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:03.591 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.591 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.591 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:03.591 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:03.591 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.591 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:03.591 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.591 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.591 14:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.591 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.591 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.591 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.591 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.591 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.591 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.591 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.591 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.591 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.591 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.591 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.591 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:03.591 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.591 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.850 nvme0n1 00:25:03.850 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.850 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.850 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.850 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.850 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.850 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.850 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.850 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.850 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.850 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.110 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.110 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.110 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:04.110 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.110 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.110 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:04.110 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:04.110 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:04.110 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:04.110 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.110 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:04.110 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:04.110 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:04.110 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:04.110 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.110 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.110 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:04.110 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:04.110 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.111 14:06:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:04.111 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.111 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.111 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.111 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.111 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.111 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.111 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.111 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.111 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.111 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.111 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.111 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.111 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.111 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.111 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:04.111 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.111 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.111 nvme0n1 00:25:04.111 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.111 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.111 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.111 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: ]] 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.371 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.631 nvme0n1 00:25:04.631 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.631 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.631 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.631 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.631 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.631 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.631 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.631 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.631 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.631 14:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: ]] 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 
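
On the initiator side, every connect_authenticate iteration in this log runs the same four RPCs. The sketch below spells that sequence out with SPDK's scripts/rpc.py instead of the test framework's rpc_cmd wrapper; key1/ckey1 are the names of DH-HMAC-CHAP keys registered earlier in the test (not shown in this excerpt), and the address, port, and NQNs are the ones visible in the trace.

  # 1. Limit the host to the digest / DH group combination under test
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

  # 2. Attach with DH-HMAC-CHAP; --dhchap-ctrlr-key makes the authentication bidirectional
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # 3. Confirm the controller exists (the test expects exactly "nvme0")
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'

  # 4. Detach so the next digest/dhgroup/key combination starts from a clean state
  scripts/rpc.py bdev_nvme_detach_controller nvme0
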
00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.631 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.200 nvme0n1 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.200 14:06:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: ]] 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.200 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.460 nvme0n1 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: ]] 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.460 14:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.050 nvme0n1 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.050 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.309 nvme0n1 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:06.309 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: ]] 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.310 14:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:06.879 nvme0n1 00:25:06.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:06.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:06.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:06.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:06.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:06.879 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: ]] 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.880 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.447 nvme0n1 00:25:07.447 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.447 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.447 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.447 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.447 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.447 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:07.706 
14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: ]] 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.706 14:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.276 nvme0n1 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: ]] 00:25:08.276 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.277 
14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.277 14:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.846 nvme0n1 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.846 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.416 nvme0n1 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: ]] 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.416 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.676 nvme0n1 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: ]] 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.676 14:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.937 nvme0n1 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:09.937 14:06:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: ]] 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.937 nvme0n1 00:25:09.937 14:06:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.937 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: ]] 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.938 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.198 nvme0n1 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.198 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.199 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.199 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.199 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.199 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.199 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.199 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.199 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:10.199 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.199 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.457 nvme0n1 00:25:10.457 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.457 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: ]] 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.458 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.717 nvme0n1 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.717 
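The trace above is the sha384 pass of host/auth.sh walking every DH group and every key index against the same kernel target. A minimal sketch of that driver loop, pieced together from the traced lines host/auth.sh@101-104, follows; the keys/ckeys arrays and the two helper functions are taken as given from the suite (both helpers are sketched further down), and the dhgroups list is limited to the groups visible in this part of the log.

  # Sketch of the per-digest driver loop seen in the xtrace (host/auth.sh@101-104).
  # keys/ckeys hold the DHHC-1 secrets in the real script; left as empty placeholders here.
  declare -a keys=() ckeys=()
  digest=sha384
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups visible in this log section
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          # program the key pair into the kernel target, then reconnect with it
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
  done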
14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: ]] 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.717 14:06:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:10.717 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.718 14:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.718 nvme0n1 00:25:10.718 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.718 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.718 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.718 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.718 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.718 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: ]] 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.976 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.977 nvme0n1 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: ]] 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.977 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.236 nvme0n1 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:11.236 
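The DHHC-1:xx:<base64>: strings echoed here are the NVMe in-band authentication secrets handed to both sides. As a reading aid only, the snippet below splits the key-index-4 secret copied from the log into its fields; treating the middle field as the secret-transformation hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the decoded blob as secret bytes plus a CRC-32 follows the TP 8006 secret format as commonly described, and is an assumption rather than something the log states.

  # Pull apart one DHHC-1 secret taken from the trace (key index 4); illustrative only.
  key='DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=:'
  IFS=: read -r tag xform secret _ <<< "$key"
  echo "tag=$tag transform=$xform"        # 03 assumed to mean a SHA-512-transformed secret
  echo -n "$secret" | base64 -d | wc -c   # decoded length; assumed to be secret + 4-byte CRC-32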
14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.236 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.496 nvme0n1 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.496 
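Every connect_authenticate call in this trace follows the same sequence: pin the initiator to one digest/dhgroup, attach over TCP with the host key (and controller key when one exists), confirm the controller came up, and detach. The reconstruction below is assembled from the traced lines host/auth.sh@55-65; the RPC names and arguments are exactly the ones in the log, while rpc_cmd, get_main_ns_ip (sketched further down) and the ckeys array come from the surrounding suite, so treat it as an approximation rather than the verbatim source.

  # Approximate reconstruction of connect_authenticate() from the xtrace (host/auth.sh@55-65).
  connect_authenticate() {
      local digest dhgroup keyid ckey
      digest=$1 dhgroup=$2 keyid=$3
      # pass a controller key only when one was generated for this key index
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

      # restrict the initiator to the digest/dhgroup under test
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # the attach succeeds only if DH-HMAC-CHAP completes against the kernel target
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
      # verify the controller registered, then tear it down for the next iteration
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }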
14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: ]] 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.496 14:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.755 nvme0n1 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:11.755 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: ]] 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:11.756 14:06:39 
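get_main_ns_ip, traced just above (nvmf/common.sh@741-755), only has to answer one question: which address the initiator should dial for the configured transport. The sketch below is inferred from the trace; the indirection through a variable name (the function stores NVMF_INITIATOR_IP, then echoes its value, 10.0.0.1) and the early-return error paths are assumptions, and TEST_TRANSPORT/NVMF_INITIATOR_IP are environment variables set elsewhere in the suite.

  # Rough reconstruction of get_main_ns_ip() (nvmf/common.sh@741-755).
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # values are variable *names*, not addresses
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      # give up if the transport is unset or has no candidate variable (fallback assumed)
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1   # dereference the name; resolves to 10.0.0.1 in this run
      echo "${!ip}"
  }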
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.756 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.015 nvme0n1 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: ]] 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.015 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.275 nvme0n1 00:25:12.275 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.275 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.275 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.275 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.275 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.275 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.275 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.275 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.275 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.275 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: ]] 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.535 nvme0n1 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.535 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.794 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.795 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.795 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.795 14:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:12.795 14:06:40 
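On the target side, each nvmet_auth_set_key call (host/auth.sh@42-51) echoes the hash, the DH group, the host key and, when present, the controller key into the kernel nvmet configuration for the test host NQN. The echoed values in the sketch below are taken from the trace; the configfs attribute paths are not visible in the xtrace and are assumed from the standard nvmet host attributes, so they may differ from what the script actually writes to.

  # Sketch of nvmet_auth_set_key() (host/auth.sh@42-51); write destinations are assumptions.
  nvmet_auth_set_key() {
      local digest dhgroup keyid key ckey
      digest=$1 dhgroup=$2 keyid=$3
      key=${keys[keyid]} ckey=${ckeys[keyid]}

      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
      echo "hmac($digest)" > "$host/dhchap_hash"      # e.g. hmac(sha384)
      echo "$dhgroup"      > "$host/dhchap_dhgroup"   # e.g. ffdhe4096
      echo "$key"          > "$host/dhchap_key"       # DHHC-1 host secret from the trace
      # key index 4 has no controller key, so the bidirectional secret is optional
      [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
  }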
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.795 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.054 nvme0n1 00:25:13.054 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.054 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.054 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.054 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.054 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.054 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.054 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.054 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.054 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.054 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.054 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.054 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:13.054 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.054 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:13.054 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.054 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: ]] 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.055 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.314 nvme0n1 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: ]] 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.314 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.315 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:13.315 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.315 14:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.883 nvme0n1 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.883 14:06:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: ]] 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.883 14:06:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.883 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.142 nvme0n1 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: ]] 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.142 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.143 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.143 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.143 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:14.143 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.143 
14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.713 nvme0n1 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.713 14:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.713 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.713 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.713 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.713 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.713 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.713 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.713 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.713 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.713 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.713 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.714 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:14.714 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.714 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.973 nvme0n1 00:25:14.973 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.973 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.973 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.973 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.973 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.973 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.973 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.973 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.973 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.973 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.232 14:06:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: ]] 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.232 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.803 nvme0n1 00:25:15.803 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.803 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.803 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.803 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.803 14:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: ]] 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:15.803 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.804 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.804 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.804 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.804 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.804 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.804 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.804 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.804 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.804 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.804 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.804 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.804 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.804 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.804 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.804 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.804 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.373 nvme0n1 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: ]] 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.373 
14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.373 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.374 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.374 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.374 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:16.374 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.374 14:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.944 nvme0n1 00:25:16.944 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.944 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.944 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.944 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.944 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.944 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.944 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.944 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.944 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.944 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.944 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.944 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.944 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: ]] 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.945 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.515 nvme0n1 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.515 14:06:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.515 14:06:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.515 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.084 nvme0n1 00:25:18.084 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.084 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.084 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.084 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.084 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.084 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: ]] 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:18.345 nvme0n1 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.345 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: ]] 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.346 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.606 nvme0n1 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:18.606 
14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: ]] 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.606 14:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.867 nvme0n1 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: ]] 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.867 
14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.867 nvme0n1 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.867 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.127 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.127 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.128 nvme0n1 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: ]] 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.128 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.389 nvme0n1 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.389 
14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: ]] 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.389 14:06:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.389 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.650 nvme0n1 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:19.650 14:06:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: ]] 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.650 14:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.949 nvme0n1 00:25:19.949 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.949 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.949 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.949 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.949 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.949 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.949 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.949 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.949 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.949 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.949 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.949 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.949 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:19.949 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: ]] 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.950 14:06:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.950 nvme0n1 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.950 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.210 
14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.210 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
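[Editor's sketch] The xtrace above repeats one pattern per DH group and key index: configure the host's DH-HMAC-CHAP options, attach with keyN (plus ckeyN when a controller key exists), confirm the controller appears, then detach. The following condensed bash sketch restates that pattern; it is not the verbatim host/auth.sh. Assumptions: rpc_cmd is the autotest wrapper around scripts/rpc.py seen in the trace, and the key/ckey values are placeholders rather than the DHHC-1 secrets generated in this run. In the trace, nvmet_auth_set_key also loads the matching key on the kernel nvmet target before each attach; that target-side step is omitted here.

    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)
    # keys[i]/ckeys[i] hold the DHHC-1 secrets for key index i (placeholders).
    keys=("DHHC-1:00:placeholder-key0" "DHHC-1:00:placeholder-key1")
    ckeys=("DHHC-1:03:placeholder-ckey0" "DHHC-1:02:placeholder-ckey1")

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Restrict the host to one digest and one DH group per iteration.
            rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            # Pass a controller key only when one is defined for this key index.
            ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" "${ckey[@]}"
            # Authentication is considered successful if the controller shows up.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done

The attach/verify/detach cycle is what produces the recurring "nvme0n1" namespace lines and the bdev_nvme_get_controllers checks in the log that follows.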
00:25:20.210 nvme0n1 00:25:20.211 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.211 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.211 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.211 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.211 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.211 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: ]] 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:20.471 14:06:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.471 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.731 nvme0n1 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.731 14:06:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: ]] 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.731 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.732 14:06:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.732 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.732 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.732 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:20.732 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.732 14:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.992 nvme0n1 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: ]] 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.992 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.254 nvme0n1 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: ]] 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.254 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.515 nvme0n1 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.515 14:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.775 nvme0n1 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: ]] 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:21.775 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.776 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:21.776 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:21.776 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:21.776 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.776 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:21.776 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.776 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.776 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.776 14:06:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.776 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.776 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.776 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.776 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.776 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.776 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.776 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.776 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.776 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.776 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.776 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:21.776 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.776 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.346 nvme0n1 00:25:22.346 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: ]] 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.347 14:06:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.347 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.608 nvme0n1 00:25:22.608 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.608 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.608 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.608 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.608 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.608 14:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: ]] 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.608 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.178 nvme0n1 00:25:23.178 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.178 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.178 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.178 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.178 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.178 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.178 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.178 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.178 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.178 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.178 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.178 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.178 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:25:23.178 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: ]] 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.179 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.439 nvme0n1 00:25:23.439 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.439 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.439 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.439 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.439 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.439 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.439 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.439 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.699 14:06:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.699 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.700 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.700 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.700 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.700 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.700 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:23.700 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.700 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.960 nvme0n1 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNiYTRhMmRjYWFlN2JmM2YyNGM0MmZkNTcyNjRjYmbGaLaQ: 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: ]] 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg4ZGYwNzc0NmI1YmFmNmE5NDkyMzkwMWMwYjYwMWRiNmVlZTZjNDUwYWQ0N2Q2M2NhMDU3ZDE1MDZhOTI0ZT2cZU8=: 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.960 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.531 nvme0n1 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: ]] 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:24.531 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.789 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.790 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.790 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.790 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.790 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.790 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.790 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.790 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.790 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.790 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.790 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.790 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.790 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.790 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.790 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.790 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.359 nvme0n1 00:25:25.359 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.359 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.359 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.359 14:06:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.359 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.359 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.359 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.359 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.359 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.359 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.359 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.359 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.359 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:25.359 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.359 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:25.359 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:25.359 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:25.359 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDIzNzM0YjNjMGI2MWJlMGIyOTBhMzY0YjExYzExNTVEkO9E: 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: ]] 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTVmNWVhZDA2NjczNGJiMDhhMGY2MGQwODBmNzdkN2UDWOx6: 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.360 14:06:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.360 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.941 nvme0n1 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2UzYzg5ZDlkMzJmNmQxM2Y3ZDBjZTBjYTlmMTJhNDM0ZGZiZWMxODZkMDY2OTQ5zZxDDA==: 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: ]] 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWYyZWI0ZTM5Yjg4NWU5NmVjMGFjY2MyOWIwNDQxYjUOKKF+: 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:25.941 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.941 
14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.516 nvme0n1 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODYxZWRhZDk3NzI4ODc4ZWQ2MDA1Y2VmNjBkOWFkMTEyYTBkMTY5NmJkYTVhYzk3NWU0MDBlODc3YjNmMTFmYehgsKY=: 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:26.516 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.517 14:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.086 nvme0n1 00:25:27.086 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.086 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.086 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.086 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.086 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.086 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiODViMGE0Mzg5MGY5YWI5ZmM5ODIyNTM0NjIwNGE5MmJiYzcyNWI4MDk5ZDBi5GURKA==: 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: ]] 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWJiYzU3NTgyZTA1MWI2MTdiZjFjMjY2OGY1MzYyODhhOThkODJmOTYwYmIwMWM4IZxQVQ==: 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.087 request: 00:25:27.087 { 00:25:27.087 "name": "nvme0", 00:25:27.087 "trtype": "tcp", 00:25:27.087 "traddr": "10.0.0.1", 00:25:27.087 "adrfam": "ipv4", 00:25:27.087 "trsvcid": "4420", 00:25:27.087 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:27.087 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:27.087 "prchk_reftag": false, 00:25:27.087 "prchk_guard": false, 00:25:27.087 "hdgst": false, 00:25:27.087 "ddgst": false, 00:25:27.087 "method": "bdev_nvme_attach_controller", 00:25:27.087 "req_id": 1 00:25:27.087 } 00:25:27.087 Got JSON-RPC error response 00:25:27.087 response: 00:25:27.087 { 00:25:27.087 "code": -5, 00:25:27.087 "message": "Input/output error" 00:25:27.087 } 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.087 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.349 14:06:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.349 request: 00:25:27.349 { 00:25:27.349 "name": "nvme0", 00:25:27.349 "trtype": "tcp", 00:25:27.349 "traddr": "10.0.0.1", 00:25:27.349 "adrfam": "ipv4", 00:25:27.349 "trsvcid": "4420", 00:25:27.349 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:27.349 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:27.349 "prchk_reftag": false, 00:25:27.349 "prchk_guard": false, 00:25:27.349 "hdgst": false, 00:25:27.349 "ddgst": false, 00:25:27.349 "dhchap_key": "key2", 00:25:27.349 "method": "bdev_nvme_attach_controller", 00:25:27.349 "req_id": 1 00:25:27.349 } 00:25:27.349 Got JSON-RPC error response 00:25:27.349 response: 00:25:27.349 { 00:25:27.349 "code": -5, 00:25:27.349 "message": "Input/output error" 00:25:27.349 } 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:27.349 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.350 request: 00:25:27.350 { 00:25:27.350 "name": "nvme0", 00:25:27.350 "trtype": "tcp", 00:25:27.350 "traddr": "10.0.0.1", 00:25:27.350 "adrfam": "ipv4", 00:25:27.350 "trsvcid": "4420", 00:25:27.350 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:27.350 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:27.350 "prchk_reftag": false, 00:25:27.350 "prchk_guard": false, 00:25:27.350 "hdgst": false, 00:25:27.350 "ddgst": false, 00:25:27.350 "dhchap_key": "key1", 00:25:27.350 "dhchap_ctrlr_key": "ckey2", 00:25:27.350 "method": "bdev_nvme_attach_controller", 00:25:27.350 "req_id": 1 00:25:27.350 } 00:25:27.350 Got JSON-RPC error response 00:25:27.350 response: 00:25:27.350 { 00:25:27.350 "code": -5, 00:25:27.350 "message": "Input/output error" 00:25:27.350 } 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:27.350 rmmod nvme_tcp 00:25:27.350 rmmod nvme_fabrics 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3088761 ']' 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3088761 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 3088761 ']' 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 3088761 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3088761 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3088761' 00:25:27.350 killing process with pid 3088761 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 3088761 00:25:27.350 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 3088761 00:25:27.611 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:27.611 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:27.611 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:27.611 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:27.611 14:06:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:27.611 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.611 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:27.611 14:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.156 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:30.156 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:30.156 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:30.156 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:30.156 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:30.156 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:25:30.156 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:30.156 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:30.156 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:30.156 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:30.156 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:30.156 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:30.156 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:32.698 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:32.698 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:32.698 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:32.698 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:32.698 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:32.698 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:32.698 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:32.698 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:32.698 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:32.698 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:32.698 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:32.698 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:32.698 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:32.698 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:32.698 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:32.698 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:33.270 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:33.270 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.jfO /tmp/spdk.key-null.lXL /tmp/spdk.key-sha256.Ua0 /tmp/spdk.key-sha384.xLd /tmp/spdk.key-sha512.XVW /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:33.270 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:35.814 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:35.814 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:35.814 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:35.814 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:35.814 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:35.814 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:35.814 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:35.814 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:35.814 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:35.814 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:35.814 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:36.074 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:36.074 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:36.074 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:36.074 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:36.074 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:36.074 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:36.074 00:25:36.074 real 0m47.700s 00:25:36.074 user 0m42.110s 00:25:36.074 sys 0m11.723s 00:25:36.074 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:36.074 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.074 ************************************ 00:25:36.074 END TEST nvmf_auth_host 00:25:36.074 ************************************ 00:25:36.074 14:07:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:25:36.074 14:07:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:36.074 14:07:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:36.074 14:07:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:36.074 14:07:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.074 ************************************ 00:25:36.074 START TEST nvmf_digest 00:25:36.074 ************************************ 00:25:36.074 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:36.074 * Looking for test storage... 
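Before the digest suite output continues, a quick recap of what the nvmf_auth_host run above exercised: for each generated key the target side is re-keyed (nvmet_auth_set_key sha512 ffdhe8192 <keyid>), the initiator is attached with the matching --dhchap-key/--dhchap-ctrlr-key pair and detached again, and then deliberately key-less or mismatched attaches are expected to fail with the JSON-RPC -5 Input/output error shown in the request/response dumps. The lines below are a minimal stand-alone sketch of that positive/negative pattern, not the autotest itself; they assume SPDK_DIR points at an SPDK checkout, a DH-HMAC-CHAP-enabled subsystem is already listening on 10.0.0.1:4420, and the key names key2/ckey2 were provisioned beforehand (rpc_cmd in the log is the autotest wrapper around scripts/rpc.py).

  SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # assumed checkout location (hypothetical default)
  rpc="$SPDK_DIR/scripts/rpc.py"

  # Limit the initiator to one digest/DH-group pair, as host/auth.sh does per
  # loop iteration (sha512 + ffdhe8192 in the iterations logged above).
  "$rpc" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

  # Positive path: attach with a key pair (key2/ckey2 are key names assumed to be
  # provisioned earlier by the test script), confirm the controller, detach.
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  "$rpc" bdev_nvme_get_controllers | grep -q '"name": "nvme0"'
  "$rpc" bdev_nvme_detach_controller nvme0

  # Negative path: the same attach without keys must be rejected; the log records
  # this as a JSON-RPC error {"code": -5, "message": "Input/output error"}.
  if "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
      echo 'unexpected success: unauthenticated attach should fail' >&2
      exit 1
  fi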
00:25:36.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:36.074 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:36.074 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:36.335 
14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:25:36.335 14:07:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:41.663 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:41.663 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:25:41.663 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:41.663 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:41.663 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:41.663 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:41.663 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:41.663 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:25:41.663 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:41.663 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:25:41.663 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:25:41.663 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:25:41.663 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:25:41.663 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:25:41.663 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:25:41.663 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:41.663 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:41.663 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:41.663 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:41.663 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:41.663 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:41.664 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:41.664 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:41.664 
14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:41.664 Found net devices under 0000:86:00.0: cvl_0_0 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:41.664 Found net devices under 0000:86:00.1: cvl_0_1 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:41.664 14:07:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:41.664 14:07:09 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:41.664 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:41.664 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:41.664 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:41.925 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:41.925 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:41.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:25:41.925 00:25:41.925 --- 10.0.0.2 ping statistics --- 00:25:41.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.925 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:25:41.925 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:41.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:41.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.447 ms 00:25:41.925 00:25:41.925 --- 10.0.0.1 ping statistics --- 00:25:41.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.925 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:25:41.925 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.925 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:25:41.925 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:41.925 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.925 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:41.925 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:41.926 ************************************ 00:25:41.926 START TEST nvmf_digest_clean 00:25:41.926 ************************************ 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3101563 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3101563 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3101563 ']' 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:41.926 14:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:41.926 [2024-07-26 14:07:09.267226] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:25:41.926 [2024-07-26 14:07:09.267269] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.926 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.926 [2024-07-26 14:07:09.324230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.187 [2024-07-26 14:07:09.404228] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.187 [2024-07-26 14:07:09.404265] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.187 [2024-07-26 14:07:09.404272] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.187 [2024-07-26 14:07:09.404278] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.187 [2024-07-26 14:07:09.404283] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
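The waitforlisten step above simply blocks until the freshly launched nvmf_tgt answers on its RPC socket. A simplified sketch of that start-and-wait pattern follows (run as root; it assumes the cvl_0_0_ns_spdk namespace created in the earlier nvmf_tcp_init output still exists, SPDK_DIR is an SPDK checkout, and the 100 x 0.1 s poll budget is an arbitrary stand-in for the autotest helper):

  SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # assumed checkout location
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!

  # Crude stand-in for waitforlisten: poll the default RPC socket until the app
  # answers (spdk_get_version responds even before framework_start_init is called).
  for _ in $(seq 1 100); do
      "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1 && break
      sleep 0.1
  done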
00:25:42.187 [2024-07-26 14:07:09.404299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.757 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:42.757 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:42.757 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:42.757 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:42.757 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:42.757 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.757 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:42.757 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:42.757 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:42.757 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.757 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:42.757 null0 00:25:42.757 [2024-07-26 14:07:10.187540] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:43.017 [2024-07-26 14:07:10.211711] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.017 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.017 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:43.017 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:43.017 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:43.017 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:43.017 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:43.017 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:43.017 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:43.017 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3101812 00:25:43.017 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3101812 /var/tmp/bperf.sock 00:25:43.017 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:43.017 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3101812 ']' 00:25:43.017 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:43.017 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:25:43.017 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:43.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:43.018 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:43.018 14:07:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:43.018 [2024-07-26 14:07:10.265084] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:25:43.018 [2024-07-26 14:07:10.265127] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3101812 ] 00:25:43.018 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.018 [2024-07-26 14:07:10.320686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.018 [2024-07-26 14:07:10.402017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.959 14:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:43.959 14:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:43.959 14:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:43.959 14:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:43.959 14:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:43.959 14:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:43.959 14:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:44.218 nvme0n1 00:25:44.218 14:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:44.218 14:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:44.218 Running I/O for 2 seconds... 
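Each run_bperf pass drives bdevperf over its own RPC socket (/var/tmp/bperf.sock) rather than the target's socket. Stripped of the xtrace noise, the flow traced above for the 4 KiB randread / QD 128 case is approximately the following; the backgrounding and shortened paths are this sketch's, not the script's.

    # start bdevperf paused on a private RPC socket (randread, 4 KiB, QD 128, 2 s)
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # finish framework init, then attach the namespaced target with data digest (--ddgst) enabled
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # kick off the timed I/O pass against the attached nvme0n1 bdev
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests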
00:25:46.757 00:25:46.757 Latency(us) 00:25:46.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.757 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:46.757 nvme0n1 : 2.00 26086.15 101.90 0.00 0.00 4900.75 2393.49 23365.01 00:25:46.757 =================================================================================================================== 00:25:46.757 Total : 26086.15 101.90 0.00 0.00 4900.75 2393.49 23365.01 00:25:46.757 0 00:25:46.757 14:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:46.757 14:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:46.757 14:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:46.757 14:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:46.757 14:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:46.757 | select(.opcode=="crc32c") 00:25:46.757 | "\(.module_name) \(.executed)"' 00:25:46.757 14:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:46.757 14:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:46.757 14:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:46.757 14:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:46.757 14:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3101812 00:25:46.758 14:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3101812 ']' 00:25:46.758 14:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3101812 00:25:46.758 14:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:46.758 14:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:46.758 14:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3101812 00:25:46.758 14:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:46.758 14:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:46.758 14:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3101812' 00:25:46.758 killing process with pid 3101812 00:25:46.758 14:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3101812 00:25:46.758 Received shutdown signal, test time was about 2.000000 seconds 00:25:46.758 00:25:46.758 Latency(us) 00:25:46.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.758 =================================================================================================================== 00:25:46.758 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:46.758 14:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 3101812 00:25:46.758 14:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:46.758 14:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:46.758 14:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:46.758 14:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:46.758 14:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:46.758 14:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:46.758 14:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:46.758 14:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3102405 00:25:46.758 14:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3102405 /var/tmp/bperf.sock 00:25:46.758 14:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:46.758 14:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3102405 ']' 00:25:46.758 14:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:46.758 14:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:46.758 14:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:46.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:46.758 14:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:46.758 14:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:46.758 [2024-07-26 14:07:14.145399] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:25:46.758 [2024-07-26 14:07:14.145454] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3102405 ] 00:25:46.758 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:46.758 Zero copy mechanism will not be used. 
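The pass/fail decision after each run comes from the accelerator statistics queried over the same bperf socket (host/digest.sh@36-@96 above): with DSA scanning disabled, the crc32c opcode must have been executed at least once by the software module. A condensed sketch of that check, under the assumption that the jq filter and module name match what the trace shows:

    # pull crc32c accounting from bdevperf and keep only "module executed"
    stats=$(./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    read -r acc_module acc_executed <<< "$stats"
    # scan_dsa=false, so the expected module is the software crc32c path
    (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "digest offload check passed"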
00:25:46.758 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.017 [2024-07-26 14:07:14.201762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.017 [2024-07-26 14:07:14.274830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.585 14:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:47.585 14:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:47.585 14:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:47.585 14:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:47.585 14:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:47.844 14:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:47.845 14:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:48.103 nvme0n1 00:25:48.103 14:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:48.103 14:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:48.103 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:48.103 Zero copy mechanism will not be used. 00:25:48.103 Running I/O for 2 seconds... 
00:25:50.646 00:25:50.646 Latency(us) 00:25:50.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.646 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:50.646 nvme0n1 : 2.01 2016.78 252.10 0.00 0.00 7931.14 6582.09 33052.94 00:25:50.646 =================================================================================================================== 00:25:50.646 Total : 2016.78 252.10 0.00 0.00 7931.14 6582.09 33052.94 00:25:50.646 0 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:50.646 | select(.opcode=="crc32c") 00:25:50.646 | "\(.module_name) \(.executed)"' 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3102405 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3102405 ']' 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3102405 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3102405 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3102405' 00:25:50.646 killing process with pid 3102405 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3102405 00:25:50.646 Received shutdown signal, test time was about 2.000000 seconds 00:25:50.646 00:25:50.646 Latency(us) 00:25:50.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.646 =================================================================================================================== 00:25:50.646 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 3102405 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:50.646 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:50.647 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:50.647 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3102984 00:25:50.647 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3102984 /var/tmp/bperf.sock 00:25:50.647 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:50.647 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3102984 ']' 00:25:50.647 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:50.647 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:50.647 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:50.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:50.647 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:50.647 14:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:50.647 [2024-07-26 14:07:17.967181] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
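killprocess (from autotest_common.sh, traced at @950-@974 above) is what tears each bperf instance down between runs. A loose sketch of the checks it performs, reconstructed only from the trace (the real helper also special-cases processes owned by sudo, which is not exercised here):

    killprocess() {                         # sketch; argument is the bperf pid
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid"                      # fail early if the process is already gone
        if [[ $(uname) == Linux ]]; then
            # the trace inspects the comm name (reactor_1 here) before deciding how to kill
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                         # matches the trailing 'wait <pid>' in the trace
    }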
00:25:50.647 [2024-07-26 14:07:17.967228] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3102984 ] 00:25:50.647 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.647 [2024-07-26 14:07:18.019983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.908 [2024-07-26 14:07:18.092524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.478 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:51.478 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:51.478 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:51.478 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:51.478 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:51.738 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:51.738 14:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:51.998 nvme0n1 00:25:51.998 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:51.998 14:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:51.998 Running I/O for 2 seconds... 
00:25:54.539 00:25:54.539 Latency(us) 00:25:54.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.539 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:54.539 nvme0n1 : 2.00 25919.74 101.25 0.00 0.00 4929.65 3148.58 35332.45 00:25:54.539 =================================================================================================================== 00:25:54.539 Total : 25919.74 101.25 0.00 0.00 4929.65 3148.58 35332.45 00:25:54.539 0 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:54.540 | select(.opcode=="crc32c") 00:25:54.540 | "\(.module_name) \(.executed)"' 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3102984 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3102984 ']' 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3102984 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3102984 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3102984' 00:25:54.540 killing process with pid 3102984 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3102984 00:25:54.540 Received shutdown signal, test time was about 2.000000 seconds 00:25:54.540 00:25:54.540 Latency(us) 00:25:54.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.540 =================================================================================================================== 00:25:54.540 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 3102984 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3103682 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3103682 /var/tmp/bperf.sock 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3103682 ']' 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:54.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:54.540 14:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:54.540 [2024-07-26 14:07:21.824443] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:25:54.540 [2024-07-26 14:07:21.824491] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3103682 ] 00:25:54.540 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:54.540 Zero copy mechanism will not be used. 
00:25:54.540 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.540 [2024-07-26 14:07:21.877983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.540 [2024-07-26 14:07:21.950202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.482 14:07:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:55.482 14:07:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:55.482 14:07:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:55.482 14:07:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:55.482 14:07:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:55.482 14:07:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:55.482 14:07:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:55.742 nvme0n1 00:25:55.742 14:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:55.742 14:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:56.002 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:56.002 Zero copy mechanism will not be used. 00:25:56.002 Running I/O for 2 seconds... 
00:25:57.951 00:25:57.951 Latency(us) 00:25:57.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:57.951 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:57.951 nvme0n1 : 2.01 1286.58 160.82 0.00 0.00 12402.50 9801.91 37611.97 00:25:57.951 =================================================================================================================== 00:25:57.951 Total : 1286.58 160.82 0.00 0.00 12402.50 9801.91 37611.97 00:25:57.951 0 00:25:57.951 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:57.951 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:57.951 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:57.951 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:57.951 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:57.951 | select(.opcode=="crc32c") 00:25:57.951 | "\(.module_name) \(.executed)"' 00:25:58.211 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:58.211 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:58.211 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:58.211 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:58.212 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3103682 00:25:58.212 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3103682 ']' 00:25:58.212 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3103682 00:25:58.212 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:58.212 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:58.212 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3103682 00:25:58.212 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:58.212 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:58.212 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3103682' 00:25:58.212 killing process with pid 3103682 00:25:58.212 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3103682 00:25:58.212 Received shutdown signal, test time was about 2.000000 seconds 00:25:58.212 00:25:58.212 Latency(us) 00:25:58.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.212 =================================================================================================================== 00:25:58.212 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:58.212 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 3103682 00:25:58.472 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3101563 00:25:58.472 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3101563 ']' 00:25:58.472 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3101563 00:25:58.472 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:58.472 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:58.472 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3101563 00:25:58.472 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:58.472 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:58.472 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3101563' 00:25:58.472 killing process with pid 3101563 00:25:58.472 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3101563 00:25:58.472 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3101563 00:25:58.472 00:25:58.472 real 0m16.695s 00:25:58.472 user 0m33.157s 00:25:58.472 sys 0m3.329s 00:25:58.472 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:58.732 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:58.732 ************************************ 00:25:58.732 END TEST nvmf_digest_clean 00:25:58.732 ************************************ 00:25:58.732 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:58.732 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:58.732 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:58.732 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:58.732 ************************************ 00:25:58.732 START TEST nvmf_digest_error 00:25:58.732 ************************************ 00:25:58.732 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:25:58.732 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:58.732 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:58.732 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:58.732 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:58.732 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3104403 00:25:58.732 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3104403 00:25:58.733 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:58.733 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3104403 ']' 00:25:58.733 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.733 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:58.733 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.733 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:58.733 14:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:58.733 [2024-07-26 14:07:26.034428] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:25:58.733 [2024-07-26 14:07:26.034471] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.733 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.733 [2024-07-26 14:07:26.092382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.993 [2024-07-26 14:07:26.172133] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:58.993 [2024-07-26 14:07:26.172170] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.993 [2024-07-26 14:07:26.172178] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.993 [2024-07-26 14:07:26.172184] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.993 [2024-07-26 14:07:26.172189] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:58.993 [2024-07-26 14:07:26.172207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.563 14:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:59.563 14:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:25:59.563 14:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:59.563 14:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:59.563 14:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:59.563 14:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:59.563 14:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:59.563 14:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.563 14:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:59.563 [2024-07-26 14:07:26.882253] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:59.563 14:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.563 14:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:59.563 14:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:59.563 14:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.563 14:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:59.563 null0 00:25:59.563 [2024-07-26 14:07:26.972218] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:59.563 [2024-07-26 14:07:26.996404] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.824 14:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.824 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:59.824 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:59.824 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:59.824 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:59.824 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:59.824 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3104649 00:25:59.824 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3104649 /var/tmp/bperf.sock 00:25:59.824 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:59.824 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3104649 ']' 
00:25:59.824 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:59.824 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:59.824 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:59.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:59.824 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:59.824 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:59.824 [2024-07-26 14:07:27.045631] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:25:59.824 [2024-07-26 14:07:27.045677] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3104649 ] 00:25:59.824 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.824 [2024-07-26 14:07:27.098742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.824 [2024-07-26 14:07:27.176792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.763 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:00.763 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:00.763 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:00.763 14:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:00.763 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:00.763 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.763 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:00.763 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.763 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:00.763 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:01.022 nvme0n1 00:26:01.022 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:01.022 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.022 14:07:28 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:01.022 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.022 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:01.022 14:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:01.282 Running I/O for 2 seconds... 00:26:01.282 [2024-07-26 14:07:28.545681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.282 [2024-07-26 14:07:28.545715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.282 [2024-07-26 14:07:28.545726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.282 [2024-07-26 14:07:28.556000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.282 [2024-07-26 14:07:28.556023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.282 [2024-07-26 14:07:28.556032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.282 [2024-07-26 14:07:28.567090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.282 [2024-07-26 14:07:28.567111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.282 [2024-07-26 14:07:28.567120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.282 [2024-07-26 14:07:28.575735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.282 [2024-07-26 14:07:28.575755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.282 [2024-07-26 14:07:28.575763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.282 [2024-07-26 14:07:28.586260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.282 [2024-07-26 14:07:28.586281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.282 [2024-07-26 14:07:28.586289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.282 [2024-07-26 14:07:28.595387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.282 [2024-07-26 14:07:28.595408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.282 [2024-07-26 14:07:28.595416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.282 [2024-07-26 14:07:28.609841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.282 [2024-07-26 14:07:28.609862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.282 [2024-07-26 14:07:28.609870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.282 [2024-07-26 14:07:28.620129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.282 [2024-07-26 14:07:28.620154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.282 [2024-07-26 14:07:28.620163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.282 [2024-07-26 14:07:28.629396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.282 [2024-07-26 14:07:28.629415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.282 [2024-07-26 14:07:28.629423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.282 [2024-07-26 14:07:28.639848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.282 [2024-07-26 14:07:28.639873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.282 [2024-07-26 14:07:28.639881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.282 [2024-07-26 14:07:28.651637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.282 [2024-07-26 14:07:28.651658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.282 [2024-07-26 14:07:28.651666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.282 [2024-07-26 14:07:28.665535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.282 [2024-07-26 14:07:28.665555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.282 [2024-07-26 14:07:28.665563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.282 [2024-07-26 14:07:28.677312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.282 [2024-07-26 14:07:28.677333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:01.282 [2024-07-26 14:07:28.677341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.282 [2024-07-26 14:07:28.686431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.282 [2024-07-26 14:07:28.686451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.282 [2024-07-26 14:07:28.686458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.282 [2024-07-26 14:07:28.695849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.282 [2024-07-26 14:07:28.695869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.282 [2024-07-26 14:07:28.695877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.282 [2024-07-26 14:07:28.705843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.282 [2024-07-26 14:07:28.705863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.282 [2024-07-26 14:07:28.705871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.282 [2024-07-26 14:07:28.715601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.282 [2024-07-26 14:07:28.715621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.282 [2024-07-26 14:07:28.715629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.542 [2024-07-26 14:07:28.730703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.542 [2024-07-26 14:07:28.730725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.542 [2024-07-26 14:07:28.730732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.542 [2024-07-26 14:07:28.740524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.542 [2024-07-26 14:07:28.740544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.542 [2024-07-26 14:07:28.740552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.542 [2024-07-26 14:07:28.751314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.542 [2024-07-26 14:07:28.751334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 
lba:21195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.542 [2024-07-26 14:07:28.751342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.542 [2024-07-26 14:07:28.760403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.542 [2024-07-26 14:07:28.760423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.542 [2024-07-26 14:07:28.760431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.542 [2024-07-26 14:07:28.769958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.542 [2024-07-26 14:07:28.769978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.542 [2024-07-26 14:07:28.769986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.542 [2024-07-26 14:07:28.779273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.542 [2024-07-26 14:07:28.779293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.542 [2024-07-26 14:07:28.779301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.542 [2024-07-26 14:07:28.791757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.542 [2024-07-26 14:07:28.791777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.542 [2024-07-26 14:07:28.791785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.542 [2024-07-26 14:07:28.800180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.542 [2024-07-26 14:07:28.800203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.542 [2024-07-26 14:07:28.800211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.542 [2024-07-26 14:07:28.814688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.542 [2024-07-26 14:07:28.814708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.542 [2024-07-26 14:07:28.814716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.543 [2024-07-26 14:07:28.825841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.543 [2024-07-26 14:07:28.825861] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.543 [2024-07-26 14:07:28.825869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.543 [2024-07-26 14:07:28.835127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.543 [2024-07-26 14:07:28.835147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.543 [2024-07-26 14:07:28.835155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.543 [2024-07-26 14:07:28.844631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.543 [2024-07-26 14:07:28.844650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.543 [2024-07-26 14:07:28.844658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.543 [2024-07-26 14:07:28.854859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.543 [2024-07-26 14:07:28.854880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.543 [2024-07-26 14:07:28.854888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.543 [2024-07-26 14:07:28.863912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.543 [2024-07-26 14:07:28.863932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.543 [2024-07-26 14:07:28.863941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.543 [2024-07-26 14:07:28.873594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.543 [2024-07-26 14:07:28.873613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.543 [2024-07-26 14:07:28.873621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.543 [2024-07-26 14:07:28.883680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.543 [2024-07-26 14:07:28.883701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.543 [2024-07-26 14:07:28.883709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.543 [2024-07-26 14:07:28.892363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 
00:26:01.543 [2024-07-26 14:07:28.892383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.543 [2024-07-26 14:07:28.892391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.543 [2024-07-26 14:07:28.902549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.543 [2024-07-26 14:07:28.902569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.543 [2024-07-26 14:07:28.902576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.543 [2024-07-26 14:07:28.911286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.543 [2024-07-26 14:07:28.911306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.543 [2024-07-26 14:07:28.911314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.543 [2024-07-26 14:07:28.925703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.543 [2024-07-26 14:07:28.925723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.543 [2024-07-26 14:07:28.925731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.543 [2024-07-26 14:07:28.937144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.543 [2024-07-26 14:07:28.937163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.543 [2024-07-26 14:07:28.937171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.543 [2024-07-26 14:07:28.947398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.543 [2024-07-26 14:07:28.947418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.543 [2024-07-26 14:07:28.947426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.543 [2024-07-26 14:07:28.956462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.543 [2024-07-26 14:07:28.956482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.543 [2024-07-26 14:07:28.956490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.543 [2024-07-26 14:07:28.970914] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.543 [2024-07-26 14:07:28.970934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.543 [2024-07-26 14:07:28.970942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.803 [2024-07-26 14:07:28.982020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.803 [2024-07-26 14:07:28.982040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.803 [2024-07-26 14:07:28.982062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.803 [2024-07-26 14:07:28.995737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.803 [2024-07-26 14:07:28.995757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.803 [2024-07-26 14:07:28.995765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.803 [2024-07-26 14:07:29.006896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.803 [2024-07-26 14:07:29.006916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.803 [2024-07-26 14:07:29.006924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.803 [2024-07-26 14:07:29.017439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.803 [2024-07-26 14:07:29.017460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.803 [2024-07-26 14:07:29.017468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.803 [2024-07-26 14:07:29.027195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.803 [2024-07-26 14:07:29.027215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.803 [2024-07-26 14:07:29.027223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.803 [2024-07-26 14:07:29.036192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.803 [2024-07-26 14:07:29.036212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.803 [2024-07-26 14:07:29.036220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:01.803 [2024-07-26 14:07:29.045943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.803 [2024-07-26 14:07:29.045963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.803 [2024-07-26 14:07:29.045971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.803 [2024-07-26 14:07:29.055187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.803 [2024-07-26 14:07:29.055206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.803 [2024-07-26 14:07:29.055214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.803 [2024-07-26 14:07:29.065700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.803 [2024-07-26 14:07:29.065719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.803 [2024-07-26 14:07:29.065727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.803 [2024-07-26 14:07:29.075636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.803 [2024-07-26 14:07:29.075659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.803 [2024-07-26 14:07:29.075667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.803 [2024-07-26 14:07:29.085020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.803 [2024-07-26 14:07:29.085039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.803 [2024-07-26 14:07:29.085054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.803 [2024-07-26 14:07:29.098033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.803 [2024-07-26 14:07:29.098057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.803 [2024-07-26 14:07:29.098065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.803 [2024-07-26 14:07:29.107941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.803 [2024-07-26 14:07:29.107960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.803 [2024-07-26 14:07:29.107968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.803 [2024-07-26 14:07:29.118099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.803 [2024-07-26 14:07:29.118119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.803 [2024-07-26 14:07:29.118127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.803 [2024-07-26 14:07:29.130606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.803 [2024-07-26 14:07:29.130626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.803 [2024-07-26 14:07:29.130635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.803 [2024-07-26 14:07:29.142799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.803 [2024-07-26 14:07:29.142820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.803 [2024-07-26 14:07:29.142829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.803 [2024-07-26 14:07:29.152183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.803 [2024-07-26 14:07:29.152204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.803 [2024-07-26 14:07:29.152212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.803 [2024-07-26 14:07:29.161467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.803 [2024-07-26 14:07:29.161487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.803 [2024-07-26 14:07:29.161496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.803 [2024-07-26 14:07:29.171913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.803 [2024-07-26 14:07:29.171934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.803 [2024-07-26 14:07:29.171941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.803 [2024-07-26 14:07:29.180520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.803 [2024-07-26 14:07:29.180540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.803 [2024-07-26 14:07:29.180548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.803 [2024-07-26 14:07:29.190691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.803 [2024-07-26 14:07:29.190711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.803 [2024-07-26 14:07:29.190719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.803 [2024-07-26 14:07:29.200099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.804 [2024-07-26 14:07:29.200118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.804 [2024-07-26 14:07:29.200126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.804 [2024-07-26 14:07:29.209299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.804 [2024-07-26 14:07:29.209318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.804 [2024-07-26 14:07:29.209326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.804 [2024-07-26 14:07:29.219005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.804 [2024-07-26 14:07:29.219025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.804 [2024-07-26 14:07:29.219032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.804 [2024-07-26 14:07:29.229002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:01.804 [2024-07-26 14:07:29.229022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.804 [2024-07-26 14:07:29.229029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.063 [2024-07-26 14:07:29.241151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.063 [2024-07-26 14:07:29.241173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.063 [2024-07-26 14:07:29.241182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.063 [2024-07-26 14:07:29.249176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.063 [2024-07-26 14:07:29.249196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:02.063 [2024-07-26 14:07:29.249207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.063 [2024-07-26 14:07:29.259115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.063 [2024-07-26 14:07:29.259135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.063 [2024-07-26 14:07:29.259143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.063 [2024-07-26 14:07:29.268467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.063 [2024-07-26 14:07:29.268487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.063 [2024-07-26 14:07:29.268495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.063 [2024-07-26 14:07:29.277932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.063 [2024-07-26 14:07:29.277952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.063 [2024-07-26 14:07:29.277960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.063 [2024-07-26 14:07:29.287499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.063 [2024-07-26 14:07:29.287519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.063 [2024-07-26 14:07:29.287526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.063 [2024-07-26 14:07:29.296024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.063 [2024-07-26 14:07:29.296052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.063 [2024-07-26 14:07:29.296061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.063 [2024-07-26 14:07:29.305786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.063 [2024-07-26 14:07:29.305807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.063 [2024-07-26 14:07:29.305815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.063 [2024-07-26 14:07:29.315364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.063 [2024-07-26 14:07:29.315384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 
lba:11468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.063 [2024-07-26 14:07:29.315392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.064 [2024-07-26 14:07:29.324913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.064 [2024-07-26 14:07:29.324934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.064 [2024-07-26 14:07:29.324942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.064 [2024-07-26 14:07:29.333715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.064 [2024-07-26 14:07:29.333734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.064 [2024-07-26 14:07:29.333742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.064 [2024-07-26 14:07:29.343090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.064 [2024-07-26 14:07:29.343110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.064 [2024-07-26 14:07:29.343118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.064 [2024-07-26 14:07:29.352523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.064 [2024-07-26 14:07:29.352543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.064 [2024-07-26 14:07:29.352551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.064 [2024-07-26 14:07:29.361543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.064 [2024-07-26 14:07:29.361562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.064 [2024-07-26 14:07:29.361570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.064 [2024-07-26 14:07:29.371849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.064 [2024-07-26 14:07:29.371869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.064 [2024-07-26 14:07:29.371877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.064 [2024-07-26 14:07:29.380016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.064 [2024-07-26 14:07:29.380035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.064 [2024-07-26 14:07:29.380049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.064 [2024-07-26 14:07:29.390734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.064 [2024-07-26 14:07:29.390754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.064 [2024-07-26 14:07:29.390762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.064 [2024-07-26 14:07:29.399384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.064 [2024-07-26 14:07:29.399403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.064 [2024-07-26 14:07:29.399411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.064 [2024-07-26 14:07:29.409469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.064 [2024-07-26 14:07:29.409490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.064 [2024-07-26 14:07:29.409512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.064 [2024-07-26 14:07:29.418160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.064 [2024-07-26 14:07:29.418180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.064 [2024-07-26 14:07:29.418188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.064 [2024-07-26 14:07:29.427277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.064 [2024-07-26 14:07:29.427298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.064 [2024-07-26 14:07:29.427307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.064 [2024-07-26 14:07:29.437256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.064 [2024-07-26 14:07:29.437276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.064 [2024-07-26 14:07:29.437284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.064 [2024-07-26 14:07:29.446982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 
00:26:02.064 [2024-07-26 14:07:29.447002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.064 [2024-07-26 14:07:29.447010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.064 [2024-07-26 14:07:29.455594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.064 [2024-07-26 14:07:29.455615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.064 [2024-07-26 14:07:29.455623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.064 [2024-07-26 14:07:29.465099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.064 [2024-07-26 14:07:29.465120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.064 [2024-07-26 14:07:29.465128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.064 [2024-07-26 14:07:29.474773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.064 [2024-07-26 14:07:29.474794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.064 [2024-07-26 14:07:29.474802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.064 [2024-07-26 14:07:29.483566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.064 [2024-07-26 14:07:29.483586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.064 [2024-07-26 14:07:29.483594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.064 [2024-07-26 14:07:29.494051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.064 [2024-07-26 14:07:29.494075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.064 [2024-07-26 14:07:29.494083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.325 [2024-07-26 14:07:29.502868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.325 [2024-07-26 14:07:29.502890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.325 [2024-07-26 14:07:29.502898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.325 [2024-07-26 14:07:29.511937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.325 [2024-07-26 14:07:29.511957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.325 [2024-07-26 14:07:29.511965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.325 [2024-07-26 14:07:29.521521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.325 [2024-07-26 14:07:29.521542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.325 [2024-07-26 14:07:29.521549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.325 [2024-07-26 14:07:29.530897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.325 [2024-07-26 14:07:29.530917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.325 [2024-07-26 14:07:29.530925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.325 [2024-07-26 14:07:29.540361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.325 [2024-07-26 14:07:29.540381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.325 [2024-07-26 14:07:29.540389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.325 [2024-07-26 14:07:29.549573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.325 [2024-07-26 14:07:29.549594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.325 [2024-07-26 14:07:29.549601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.325 [2024-07-26 14:07:29.559479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.325 [2024-07-26 14:07:29.559499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.325 [2024-07-26 14:07:29.559508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.325 [2024-07-26 14:07:29.569300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.325 [2024-07-26 14:07:29.569321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.325 [2024-07-26 14:07:29.569328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.325 [2024-07-26 14:07:29.577553] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.325 [2024-07-26 14:07:29.577573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.325 [2024-07-26 14:07:29.577581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.325 [2024-07-26 14:07:29.587756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.325 [2024-07-26 14:07:29.587777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.325 [2024-07-26 14:07:29.587785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.325 [2024-07-26 14:07:29.596156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.325 [2024-07-26 14:07:29.596175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.325 [2024-07-26 14:07:29.596183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.325 [2024-07-26 14:07:29.605738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.325 [2024-07-26 14:07:29.605758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.325 [2024-07-26 14:07:29.605766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.325 [2024-07-26 14:07:29.616081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.325 [2024-07-26 14:07:29.616102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.325 [2024-07-26 14:07:29.616110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.326 [2024-07-26 14:07:29.625024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.326 [2024-07-26 14:07:29.625050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.326 [2024-07-26 14:07:29.625062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.326 [2024-07-26 14:07:29.633828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.326 [2024-07-26 14:07:29.633848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.326 [2024-07-26 14:07:29.633855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:02.326 [2024-07-26 14:07:29.643449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.326 [2024-07-26 14:07:29.643469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.326 [2024-07-26 14:07:29.643477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.326 [2024-07-26 14:07:29.652653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.326 [2024-07-26 14:07:29.652674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.326 [2024-07-26 14:07:29.652686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.326 [2024-07-26 14:07:29.661711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.326 [2024-07-26 14:07:29.661731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.326 [2024-07-26 14:07:29.661739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.326 [2024-07-26 14:07:29.671385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.326 [2024-07-26 14:07:29.671406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.326 [2024-07-26 14:07:29.671414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.326 [2024-07-26 14:07:29.680450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.326 [2024-07-26 14:07:29.680471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.326 [2024-07-26 14:07:29.680478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.326 [2024-07-26 14:07:29.689950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.326 [2024-07-26 14:07:29.689970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.326 [2024-07-26 14:07:29.689980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.326 [2024-07-26 14:07:29.698984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.326 [2024-07-26 14:07:29.699004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.326 [2024-07-26 14:07:29.699012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.326 [2024-07-26 14:07:29.708246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.326 [2024-07-26 14:07:29.708266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.326 [2024-07-26 14:07:29.708274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.326 [2024-07-26 14:07:29.717980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.326 [2024-07-26 14:07:29.718000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.326 [2024-07-26 14:07:29.718008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.326 [2024-07-26 14:07:29.727160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.326 [2024-07-26 14:07:29.727181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.326 [2024-07-26 14:07:29.727189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.326 [2024-07-26 14:07:29.736680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.326 [2024-07-26 14:07:29.736701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.326 [2024-07-26 14:07:29.736709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.326 [2024-07-26 14:07:29.746167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.326 [2024-07-26 14:07:29.746187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.326 [2024-07-26 14:07:29.746195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.326 [2024-07-26 14:07:29.754949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.326 [2024-07-26 14:07:29.754969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.326 [2024-07-26 14:07:29.754977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.586 [2024-07-26 14:07:29.764992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.586 [2024-07-26 14:07:29.765013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.765020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.774612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.774633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.774641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.783781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.783802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.783810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.792903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.792923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.792931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.802138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.802158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.802165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.811348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.811368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.811379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.821479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.821500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.821508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.830720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.830740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:02.587 [2024-07-26 14:07:29.830749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.839961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.839981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.839989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.849185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.849206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.849213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.858403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.858422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.858430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.867583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.867602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.867609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.876133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.876154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.876161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.885555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.885575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.885583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.895274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.895300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 
lba:16375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.895308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.904561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.904581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.904589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.914047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.914067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.914075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.923179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.923200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.923207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.932128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.932149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.932157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.942000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.942019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.942027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.951382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.951401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.951410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.960585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.960604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.960612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.969464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.969483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.969491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.979450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.979469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.979477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.587 [2024-07-26 14:07:29.988220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.587 [2024-07-26 14:07:29.988239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.587 [2024-07-26 14:07:29.988247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.588 [2024-07-26 14:07:29.997805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.588 [2024-07-26 14:07:29.997825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.588 [2024-07-26 14:07:29.997833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.588 [2024-07-26 14:07:30.007103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.588 [2024-07-26 14:07:30.007125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.588 [2024-07-26 14:07:30.007133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.588 [2024-07-26 14:07:30.020136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.588 [2024-07-26 14:07:30.020160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.588 [2024-07-26 14:07:30.020170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.848 [2024-07-26 14:07:30.029181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 
00:26:02.848 [2024-07-26 14:07:30.029204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.848 [2024-07-26 14:07:30.029213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.848 [2024-07-26 14:07:30.040337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.848 [2024-07-26 14:07:30.040358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.848 [2024-07-26 14:07:30.040367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.848 [2024-07-26 14:07:30.048700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.848 [2024-07-26 14:07:30.048721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.848 [2024-07-26 14:07:30.048729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.848 [2024-07-26 14:07:30.059595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.848 [2024-07-26 14:07:30.059617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.848 [2024-07-26 14:07:30.059630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.848 [2024-07-26 14:07:30.070319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.848 [2024-07-26 14:07:30.070340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.848 [2024-07-26 14:07:30.070348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.848 [2024-07-26 14:07:30.079428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.848 [2024-07-26 14:07:30.079448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.848 [2024-07-26 14:07:30.079457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.848 [2024-07-26 14:07:30.088768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.848 [2024-07-26 14:07:30.088788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.848 [2024-07-26 14:07:30.088796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.849 [2024-07-26 14:07:30.098912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.849 [2024-07-26 14:07:30.098933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.849 [2024-07-26 14:07:30.098941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.849 [2024-07-26 14:07:30.108234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.849 [2024-07-26 14:07:30.108254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.849 [2024-07-26 14:07:30.108262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.849 [2024-07-26 14:07:30.117760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.849 [2024-07-26 14:07:30.117780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.849 [2024-07-26 14:07:30.117788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.849 [2024-07-26 14:07:30.128098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.849 [2024-07-26 14:07:30.128118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.849 [2024-07-26 14:07:30.128126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.849 [2024-07-26 14:07:30.137800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.849 [2024-07-26 14:07:30.137820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.849 [2024-07-26 14:07:30.137828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.849 [2024-07-26 14:07:30.146965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.849 [2024-07-26 14:07:30.146985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.849 [2024-07-26 14:07:30.146993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.849 [2024-07-26 14:07:30.156765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.849 [2024-07-26 14:07:30.156786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.849 [2024-07-26 14:07:30.156794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.849 [2024-07-26 14:07:30.166265] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.849 [2024-07-26 14:07:30.166285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.849 [2024-07-26 14:07:30.166293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.849 [2024-07-26 14:07:30.176192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.849 [2024-07-26 14:07:30.176211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.849 [2024-07-26 14:07:30.176219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.849 [2024-07-26 14:07:30.185663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.849 [2024-07-26 14:07:30.185682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.849 [2024-07-26 14:07:30.185690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.849 [2024-07-26 14:07:30.195388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.849 [2024-07-26 14:07:30.195407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.849 [2024-07-26 14:07:30.195415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.849 [2024-07-26 14:07:30.204186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.849 [2024-07-26 14:07:30.204206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.849 [2024-07-26 14:07:30.204213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.849 [2024-07-26 14:07:30.214986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.849 [2024-07-26 14:07:30.215006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.849 [2024-07-26 14:07:30.215014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.849 [2024-07-26 14:07:30.223535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.849 [2024-07-26 14:07:30.223554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.849 [2024-07-26 14:07:30.223565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:02.849 [2024-07-26 14:07:30.233660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.849 [2024-07-26 14:07:30.233680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.849 [2024-07-26 14:07:30.233688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.849 [2024-07-26 14:07:30.243386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.849 [2024-07-26 14:07:30.243406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.849 [2024-07-26 14:07:30.243414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.849 [2024-07-26 14:07:30.252976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.849 [2024-07-26 14:07:30.252995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.849 [2024-07-26 14:07:30.253003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.849 [2024-07-26 14:07:30.262479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.849 [2024-07-26 14:07:30.262498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.849 [2024-07-26 14:07:30.262506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.849 [2024-07-26 14:07:30.271973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.849 [2024-07-26 14:07:30.271992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.849 [2024-07-26 14:07:30.272000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.849 [2024-07-26 14:07:30.281492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:02.849 [2024-07-26 14:07:30.281513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.849 [2024-07-26 14:07:30.281521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.291767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.291788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.110 [2024-07-26 14:07:30.291796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.301364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.301383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.110 [2024-07-26 14:07:30.301392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.310118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.310143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.110 [2024-07-26 14:07:30.310152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.321117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.321139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.110 [2024-07-26 14:07:30.321147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.330319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.330339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.110 [2024-07-26 14:07:30.330348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.339594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.339613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.110 [2024-07-26 14:07:30.339621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.350096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.350116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.110 [2024-07-26 14:07:30.350124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.358776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.358796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.110 [2024-07-26 14:07:30.358804] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.368896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.368916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.110 [2024-07-26 14:07:30.368924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.382207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.382226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.110 [2024-07-26 14:07:30.382235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.394891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.394912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.110 [2024-07-26 14:07:30.394920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.403713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.403733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.110 [2024-07-26 14:07:30.403741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.414183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.414204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.110 [2024-07-26 14:07:30.414212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.423592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.423612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.110 [2024-07-26 14:07:30.423620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.433288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.433307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:03.110 [2024-07-26 14:07:30.433314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.450643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.450663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.110 [2024-07-26 14:07:30.450671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.459757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.459776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.110 [2024-07-26 14:07:30.459785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.469864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.469883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.110 [2024-07-26 14:07:30.469890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.482956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.482975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.110 [2024-07-26 14:07:30.482983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.494386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.494405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.110 [2024-07-26 14:07:30.494416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.504827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.504846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.110 [2024-07-26 14:07:30.504853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.110 [2024-07-26 14:07:30.514208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11bb4f0) 00:26:03.110 [2024-07-26 14:07:30.514227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 
lba:5818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.110 [2024-07-26 14:07:30.514235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:03.110
00:26:03.110 Latency(us)
00:26:03.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:03.110 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:03.110 nvme0n1 : 2.01 25352.72 99.03 0.00 0.00 5038.25 2792.40 27696.08
00:26:03.110 ===================================================================================================================
00:26:03.110 Total : 25352.72 99.03 0.00 0.00 5038.25 2792.40 27696.08
00:26:03.110 0
00:26:03.111 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:03.371 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:03.371 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:03.371 | .driver_specific
00:26:03.371 | .nvme_error
00:26:03.371 | .status_code
00:26:03.371 | .command_transient_transport_error'
00:26:03.371 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:03.371 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 199 > 0 ))
00:26:03.371 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3104649
00:26:03.371 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3104649 ']'
00:26:03.371 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3104649
00:26:03.371 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:26:03.371 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:03.371 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3104649
00:26:03.371 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:26:03.371 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:26:03.371 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3104649'
00:26:03.371 killing process with pid 3104649
00:26:03.371 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3104649
00:26:03.371 Received shutdown signal, test time was about 2.000000 seconds
00:26:03.371
00:26:03.371 Latency(us)
00:26:03.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:03.371 ===================================================================================================================
00:26:03.371 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:03.371 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3104649
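The trace above is the pass/fail check for this subtest: get_transient_errcount asks the bdevperf app for its iostat over the /var/tmp/bperf.sock RPC socket and extracts the NVMe transient-transport-error counter (199 in this run) from the JSON with jq. A minimal standalone sketch of that check, assuming bdevperf is still listening on /var/tmp/bperf.sock and exposes the bdev as nvme0n1 (names and paths taken from this run):

    #!/usr/bin/env bash
    # Sketch only: reproduces the iostat query shown in the trace above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The subtest passes only if at least one transient transport error was recorded.
    (( errcount > 0 )) && echo "observed $errcount transient transport errors"

The nvme_error block only shows up in bdev_get_iostat output because --nvme-error-stat was enabled when the bdev_nvme options were set earlier in the test.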
00:26:03.632 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:26:03.632 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:03.632 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:03.632 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:03.632 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:03.632 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3105167
00:26:03.632 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3105167 /var/tmp/bperf.sock
00:26:03.632 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:26:03.632 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3105167 ']'
00:26:03.632 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:03.632 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:03.632 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:03.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:03.632 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:03.632 14:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:03.632 [2024-07-26 14:07:30.990932] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization...
00:26:03.632 [2024-07-26 14:07:30.990983] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3105167 ]
00:26:03.632 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:03.632 Zero copy mechanism will not be used.
00:26:03.632 EAL: No free 2048 kB hugepages reported on node 1
00:26:03.632 [2024-07-26 14:07:31.044916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:03.892 [2024-07-26 14:07:31.119981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:04.462 14:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:04.462 14:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:26:04.462 14:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:04.463 14:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:04.722 14:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:04.722 14:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:04.722 14:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:04.722 14:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:04.722 14:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:04.722 14:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:04.982 nvme0n1
00:26:04.982 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:04.982 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:04.982 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:04.982 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:04.982 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:04.982 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:04.982 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:04.982 Zero copy mechanism will not be used.
00:26:04.982 Running I/O for 2 seconds...
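Before the 131072-byte pass starts, the harness reconfigures everything over RPC: NVMe error statistics are enabled in the bdevperf app, any earlier CRC32C error injection is cleared, the controller is attached with --ddgst so data digests are generated and verified, corruption is injected into the crc32c accel operation, and bdevperf.py kicks off the timed run. A condensed sketch of that sequence follows, with all flags copied from the trace above; the trace does not show which RPC socket rpc_cmd uses, so routing those calls to the default SPDK socket below is an assumption:

    #!/usr/bin/env bash
    # Sketch of the setup traced above (flags copied verbatim from the trace).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # RPCs aimed at the bdevperf app
    TGT="$SPDK/scripts/rpc.py"                            # assumption: nvmf target app on the default socket
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $TGT accel_error_inject_error -o crc32c -t disable    # clear any previous injection
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0            # data digest enabled on the initiator side
    $TGT accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

With the injection active, every digest mismatch surfaces in the log below as a data digest error on the host followed by a COMMAND TRANSIENT TRANSPORT ERROR completion, which is exactly what the error counter checked after the run adds up.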
00:26:04.982 [2024-07-26 14:07:32.340736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:04.982 [2024-07-26 14:07:32.340773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.982 [2024-07-26 14:07:32.340783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.982 [2024-07-26 14:07:32.355831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:04.983 [2024-07-26 14:07:32.355860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.983 [2024-07-26 14:07:32.355870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.983 [2024-07-26 14:07:32.370680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:04.983 [2024-07-26 14:07:32.370703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.983 [2024-07-26 14:07:32.370712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.983 [2024-07-26 14:07:32.385533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:04.983 [2024-07-26 14:07:32.385555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.983 [2024-07-26 14:07:32.385563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.983 [2024-07-26 14:07:32.400307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:04.983 [2024-07-26 14:07:32.400328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.983 [2024-07-26 14:07:32.400336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.983 [2024-07-26 14:07:32.415535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:04.983 [2024-07-26 14:07:32.415556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.983 [2024-07-26 14:07:32.415565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.243 [2024-07-26 14:07:32.431064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.243 [2024-07-26 14:07:32.431086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.243 [2024-07-26 14:07:32.431094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.243 [2024-07-26 14:07:32.446106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.243 [2024-07-26 14:07:32.446127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.243 [2024-07-26 14:07:32.446135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.243 [2024-07-26 14:07:32.461276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.243 [2024-07-26 14:07:32.461297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.243 [2024-07-26 14:07:32.461305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.243 [2024-07-26 14:07:32.476223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.243 [2024-07-26 14:07:32.476244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.243 [2024-07-26 14:07:32.476252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.243 [2024-07-26 14:07:32.491112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.243 [2024-07-26 14:07:32.491133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.243 [2024-07-26 14:07:32.491141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.243 [2024-07-26 14:07:32.505922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.244 [2024-07-26 14:07:32.505943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.244 [2024-07-26 14:07:32.505951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.244 [2024-07-26 14:07:32.520972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.244 [2024-07-26 14:07:32.520992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.244 [2024-07-26 14:07:32.521000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.244 [2024-07-26 14:07:32.535704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.244 [2024-07-26 14:07:32.535725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.244 [2024-07-26 14:07:32.535733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.244 [2024-07-26 14:07:32.550572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.244 [2024-07-26 14:07:32.550591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.244 [2024-07-26 14:07:32.550603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.244 [2024-07-26 14:07:32.565324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.244 [2024-07-26 14:07:32.565345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.244 [2024-07-26 14:07:32.565353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.244 [2024-07-26 14:07:32.580313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.244 [2024-07-26 14:07:32.580334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.244 [2024-07-26 14:07:32.580342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.244 [2024-07-26 14:07:32.595049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.244 [2024-07-26 14:07:32.595070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.244 [2024-07-26 14:07:32.595077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.244 [2024-07-26 14:07:32.614490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.244 [2024-07-26 14:07:32.614510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.244 [2024-07-26 14:07:32.614518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.244 [2024-07-26 14:07:32.629313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.244 [2024-07-26 14:07:32.629333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.244 [2024-07-26 14:07:32.629340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.244 [2024-07-26 14:07:32.644278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.244 [2024-07-26 14:07:32.644299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:05.244 [2024-07-26 14:07:32.644307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.244 [2024-07-26 14:07:32.658993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.244 [2024-07-26 14:07:32.659013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.244 [2024-07-26 14:07:32.659020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.244 [2024-07-26 14:07:32.673724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.244 [2024-07-26 14:07:32.673745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.244 [2024-07-26 14:07:32.673753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.505 [2024-07-26 14:07:32.697113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.505 [2024-07-26 14:07:32.697138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.505 [2024-07-26 14:07:32.697146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.505 [2024-07-26 14:07:32.715699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.505 [2024-07-26 14:07:32.715718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.505 [2024-07-26 14:07:32.715726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.505 [2024-07-26 14:07:32.731275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.505 [2024-07-26 14:07:32.731295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.505 [2024-07-26 14:07:32.731303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.505 [2024-07-26 14:07:32.746858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.505 [2024-07-26 14:07:32.746877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.505 [2024-07-26 14:07:32.746885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.505 [2024-07-26 14:07:32.762953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.505 [2024-07-26 14:07:32.762973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.505 [2024-07-26 14:07:32.762980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.505 [2024-07-26 14:07:32.779186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.505 [2024-07-26 14:07:32.779205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.505 [2024-07-26 14:07:32.779213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.505 [2024-07-26 14:07:32.795185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.505 [2024-07-26 14:07:32.795205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.505 [2024-07-26 14:07:32.795213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.505 [2024-07-26 14:07:32.817556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.505 [2024-07-26 14:07:32.817576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.505 [2024-07-26 14:07:32.817584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.505 [2024-07-26 14:07:32.835947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.505 [2024-07-26 14:07:32.835967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.505 [2024-07-26 14:07:32.835974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.505 [2024-07-26 14:07:32.858240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.505 [2024-07-26 14:07:32.858260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.505 [2024-07-26 14:07:32.858268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.505 [2024-07-26 14:07:32.878378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.505 [2024-07-26 14:07:32.878399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.505 [2024-07-26 14:07:32.878406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.505 [2024-07-26 14:07:32.894651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.505 [2024-07-26 14:07:32.894671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.505 [2024-07-26 14:07:32.894679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.505 [2024-07-26 14:07:32.910239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.505 [2024-07-26 14:07:32.910259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.505 [2024-07-26 14:07:32.910266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.505 [2024-07-26 14:07:32.926534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.505 [2024-07-26 14:07:32.926554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.505 [2024-07-26 14:07:32.926561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.766 [2024-07-26 14:07:32.942020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.766 [2024-07-26 14:07:32.942041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.766 [2024-07-26 14:07:32.942054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.766 [2024-07-26 14:07:32.957681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.766 [2024-07-26 14:07:32.957702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.766 [2024-07-26 14:07:32.957710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.766 [2024-07-26 14:07:32.973277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.766 [2024-07-26 14:07:32.973297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.766 [2024-07-26 14:07:32.973304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.766 [2024-07-26 14:07:32.989599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.766 [2024-07-26 14:07:32.989619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.766 [2024-07-26 14:07:32.989630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.766 [2024-07-26 14:07:33.005656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 
00:26:05.766 [2024-07-26 14:07:33.005676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.766 [2024-07-26 14:07:33.005684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.766 [2024-07-26 14:07:33.021505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.766 [2024-07-26 14:07:33.021524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.766 [2024-07-26 14:07:33.021532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.766 [2024-07-26 14:07:33.036502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.766 [2024-07-26 14:07:33.036522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.766 [2024-07-26 14:07:33.036530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.766 [2024-07-26 14:07:33.051855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.766 [2024-07-26 14:07:33.051874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.766 [2024-07-26 14:07:33.051882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.766 [2024-07-26 14:07:33.067647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.766 [2024-07-26 14:07:33.067667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.766 [2024-07-26 14:07:33.067674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.766 [2024-07-26 14:07:33.094458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.766 [2024-07-26 14:07:33.094478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.766 [2024-07-26 14:07:33.094485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.766 [2024-07-26 14:07:33.112379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.766 [2024-07-26 14:07:33.112399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.766 [2024-07-26 14:07:33.112406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.766 [2024-07-26 14:07:33.128222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.766 [2024-07-26 14:07:33.128242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.766 [2024-07-26 14:07:33.128249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.766 [2024-07-26 14:07:33.143256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.767 [2024-07-26 14:07:33.143275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.767 [2024-07-26 14:07:33.143283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.767 [2024-07-26 14:07:33.158884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.767 [2024-07-26 14:07:33.158904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.767 [2024-07-26 14:07:33.158911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.767 [2024-07-26 14:07:33.181624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.767 [2024-07-26 14:07:33.181644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.767 [2024-07-26 14:07:33.181651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.767 [2024-07-26 14:07:33.200622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:05.767 [2024-07-26 14:07:33.200642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.767 [2024-07-26 14:07:33.200650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.034 [2024-07-26 14:07:33.216929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.034 [2024-07-26 14:07:33.216950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.034 [2024-07-26 14:07:33.216958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.034 [2024-07-26 14:07:33.241671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.034 [2024-07-26 14:07:33.241691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.034 [2024-07-26 14:07:33.241699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.034 [2024-07-26 14:07:33.258010] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.034 [2024-07-26 14:07:33.258029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.034 [2024-07-26 14:07:33.258037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.034 [2024-07-26 14:07:33.281311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.034 [2024-07-26 14:07:33.281331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.034 [2024-07-26 14:07:33.281339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.034 [2024-07-26 14:07:33.297215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.034 [2024-07-26 14:07:33.297235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.034 [2024-07-26 14:07:33.297246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.034 [2024-07-26 14:07:33.322572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.034 [2024-07-26 14:07:33.322591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.034 [2024-07-26 14:07:33.322599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.034 [2024-07-26 14:07:33.339436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.034 [2024-07-26 14:07:33.339456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.034 [2024-07-26 14:07:33.339463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.034 [2024-07-26 14:07:33.355505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.034 [2024-07-26 14:07:33.355524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.034 [2024-07-26 14:07:33.355532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.034 [2024-07-26 14:07:33.371632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.034 [2024-07-26 14:07:33.371653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.034 [2024-07-26 14:07:33.371661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:26:06.034 [2024-07-26 14:07:33.394267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.034 [2024-07-26 14:07:33.394288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.034 [2024-07-26 14:07:33.394296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.034 [2024-07-26 14:07:33.413302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.034 [2024-07-26 14:07:33.413322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.034 [2024-07-26 14:07:33.413329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.034 [2024-07-26 14:07:33.429657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.034 [2024-07-26 14:07:33.429677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.034 [2024-07-26 14:07:33.429685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.034 [2024-07-26 14:07:33.454376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.034 [2024-07-26 14:07:33.454396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.034 [2024-07-26 14:07:33.454404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.294 [2024-07-26 14:07:33.471972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.294 [2024-07-26 14:07:33.471998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.294 [2024-07-26 14:07:33.472006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.294 [2024-07-26 14:07:33.486824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.294 [2024-07-26 14:07:33.486845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.294 [2024-07-26 14:07:33.486853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.294 [2024-07-26 14:07:33.501549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.294 [2024-07-26 14:07:33.501570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.294 [2024-07-26 14:07:33.501578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.294 [2024-07-26 14:07:33.516301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.295 [2024-07-26 14:07:33.516322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.295 [2024-07-26 14:07:33.516330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.295 [2024-07-26 14:07:33.531003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.295 [2024-07-26 14:07:33.531023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.295 [2024-07-26 14:07:33.531030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.295 [2024-07-26 14:07:33.545942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.295 [2024-07-26 14:07:33.545962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.295 [2024-07-26 14:07:33.545970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.295 [2024-07-26 14:07:33.560683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.295 [2024-07-26 14:07:33.560703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.295 [2024-07-26 14:07:33.560711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.295 [2024-07-26 14:07:33.575756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.295 [2024-07-26 14:07:33.575775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.295 [2024-07-26 14:07:33.575783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.295 [2024-07-26 14:07:33.594642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.295 [2024-07-26 14:07:33.594662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.295 [2024-07-26 14:07:33.594670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.295 [2024-07-26 14:07:33.609647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.295 [2024-07-26 14:07:33.609667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.295 [2024-07-26 14:07:33.609675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.295 [2024-07-26 14:07:33.624415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.295 [2024-07-26 14:07:33.624435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.295 [2024-07-26 14:07:33.624443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.295 [2024-07-26 14:07:33.639416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.295 [2024-07-26 14:07:33.639437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.295 [2024-07-26 14:07:33.639444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.295 [2024-07-26 14:07:33.654269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.295 [2024-07-26 14:07:33.654290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.295 [2024-07-26 14:07:33.654298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.295 [2024-07-26 14:07:33.669054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.295 [2024-07-26 14:07:33.669074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.295 [2024-07-26 14:07:33.669082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.295 [2024-07-26 14:07:33.683798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.295 [2024-07-26 14:07:33.683818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.295 [2024-07-26 14:07:33.683826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.295 [2024-07-26 14:07:33.698583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.295 [2024-07-26 14:07:33.698604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.295 [2024-07-26 14:07:33.698612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.295 [2024-07-26 14:07:33.713438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.295 [2024-07-26 14:07:33.713459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.295 
[2024-07-26 14:07:33.713467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.295 [2024-07-26 14:07:33.728240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.295 [2024-07-26 14:07:33.728261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.295 [2024-07-26 14:07:33.728272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.556 [2024-07-26 14:07:33.743037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.556 [2024-07-26 14:07:33.743064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.556 [2024-07-26 14:07:33.743072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.556 [2024-07-26 14:07:33.758024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.556 [2024-07-26 14:07:33.758050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.556 [2024-07-26 14:07:33.758059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.556 [2024-07-26 14:07:33.772890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.556 [2024-07-26 14:07:33.772910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.556 [2024-07-26 14:07:33.772918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.556 [2024-07-26 14:07:33.788181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.556 [2024-07-26 14:07:33.788200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.556 [2024-07-26 14:07:33.788208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.556 [2024-07-26 14:07:33.803015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.556 [2024-07-26 14:07:33.803036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.556 [2024-07-26 14:07:33.803050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.556 [2024-07-26 14:07:33.817918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.556 [2024-07-26 14:07:33.817939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.556 [2024-07-26 14:07:33.817947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.556 [2024-07-26 14:07:33.832712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.556 [2024-07-26 14:07:33.832732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.556 [2024-07-26 14:07:33.832740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.556 [2024-07-26 14:07:33.847518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.556 [2024-07-26 14:07:33.847539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.556 [2024-07-26 14:07:33.847547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.556 [2024-07-26 14:07:33.862440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.556 [2024-07-26 14:07:33.862466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.556 [2024-07-26 14:07:33.862474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.556 [2024-07-26 14:07:33.877276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.556 [2024-07-26 14:07:33.877297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.556 [2024-07-26 14:07:33.877306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.556 [2024-07-26 14:07:33.892090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.556 [2024-07-26 14:07:33.892111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.556 [2024-07-26 14:07:33.892119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.556 [2024-07-26 14:07:33.906975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.556 [2024-07-26 14:07:33.906995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.556 [2024-07-26 14:07:33.907003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.556 [2024-07-26 14:07:33.921835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.556 [2024-07-26 14:07:33.921858] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.556 [2024-07-26 14:07:33.921866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.556 [2024-07-26 14:07:33.936676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.556 [2024-07-26 14:07:33.936697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.556 [2024-07-26 14:07:33.936704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.556 [2024-07-26 14:07:33.951435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.556 [2024-07-26 14:07:33.951456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.557 [2024-07-26 14:07:33.951464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.557 [2024-07-26 14:07:33.966409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.557 [2024-07-26 14:07:33.966431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.557 [2024-07-26 14:07:33.966439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.557 [2024-07-26 14:07:33.981467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.557 [2024-07-26 14:07:33.981488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.557 [2024-07-26 14:07:33.981500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.817 [2024-07-26 14:07:33.996436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.817 [2024-07-26 14:07:33.996458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.817 [2024-07-26 14:07:33.996465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.817 [2024-07-26 14:07:34.011627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.817 [2024-07-26 14:07:34.011647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.817 [2024-07-26 14:07:34.011655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.817 [2024-07-26 14:07:34.026666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.817 [2024-07-26 
14:07:34.026687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.817 [2024-07-26 14:07:34.026695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.817 [2024-07-26 14:07:34.041586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.817 [2024-07-26 14:07:34.041608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.817 [2024-07-26 14:07:34.041615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.817 [2024-07-26 14:07:34.056595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.817 [2024-07-26 14:07:34.056616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.817 [2024-07-26 14:07:34.056623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.817 [2024-07-26 14:07:34.074746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.817 [2024-07-26 14:07:34.074767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.817 [2024-07-26 14:07:34.074775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.817 [2024-07-26 14:07:34.089788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.817 [2024-07-26 14:07:34.089808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.817 [2024-07-26 14:07:34.089816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.817 [2024-07-26 14:07:34.105020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.817 [2024-07-26 14:07:34.105040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.817 [2024-07-26 14:07:34.105057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.817 [2024-07-26 14:07:34.119784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.818 [2024-07-26 14:07:34.119807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.818 [2024-07-26 14:07:34.119815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.818 [2024-07-26 14:07:34.134535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xa7e030) 00:26:06.818 [2024-07-26 14:07:34.134555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.818 [2024-07-26 14:07:34.134562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.818 [2024-07-26 14:07:34.149370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.818 [2024-07-26 14:07:34.149389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.818 [2024-07-26 14:07:34.149397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.818 [2024-07-26 14:07:34.164408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.818 [2024-07-26 14:07:34.164427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.818 [2024-07-26 14:07:34.164435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.818 [2024-07-26 14:07:34.179154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.818 [2024-07-26 14:07:34.179173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.818 [2024-07-26 14:07:34.179181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.818 [2024-07-26 14:07:34.194025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.818 [2024-07-26 14:07:34.194051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.818 [2024-07-26 14:07:34.194059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.818 [2024-07-26 14:07:34.208980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.818 [2024-07-26 14:07:34.209000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.818 [2024-07-26 14:07:34.209008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.818 [2024-07-26 14:07:34.223729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.818 [2024-07-26 14:07:34.223749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.818 [2024-07-26 14:07:34.223756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.818 [2024-07-26 14:07:34.238572] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:06.818 [2024-07-26 14:07:34.238592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.818 [2024-07-26 14:07:34.238600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.078 [2024-07-26 14:07:34.253356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:07.078 [2024-07-26 14:07:34.253377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.078 [2024-07-26 14:07:34.253385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.078 [2024-07-26 14:07:34.268537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:07.078 [2024-07-26 14:07:34.268557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.078 [2024-07-26 14:07:34.268565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.078 [2024-07-26 14:07:34.283371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:07.078 [2024-07-26 14:07:34.283391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.078 [2024-07-26 14:07:34.283399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.078 [2024-07-26 14:07:34.298139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7e030) 00:26:07.078 [2024-07-26 14:07:34.298159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.078 [2024-07-26 14:07:34.298167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.078 00:26:07.078 Latency(us) 00:26:07.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.078 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:07.078 nvme0n1 : 2.00 1898.23 237.28 0.00 0.00 8426.90 7237.45 32141.13 00:26:07.078 =================================================================================================================== 00:26:07.078 Total : 1898.23 237.28 0.00 0.00 8426.90 7237.45 32141.13 00:26:07.078 0 00:26:07.078 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:07.078 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:07.078 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:07.078 | .driver_specific 00:26:07.078 | .nvme_error 00:26:07.078 | .status_code 00:26:07.079 | .command_transient_transport_error' 
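[editor's note] For readers following the trace: the jq filter just above and the rpc.py call that follows are how digest.sh's get_transient_errcount helper reads the NVMe "command transient transport error" counter out of bdevperf's I/O statistics; the "(( 122 > 0 ))" check a few lines below then asserts that the injected data-digest corruption was actually observed. A minimal standalone sketch of the same query is shown here (assumptions: the SPDK tree and bperf RPC socket paths are the ones printed in this log, bdevperf is already running with "-r /var/tmp/bperf.sock", and bdev_nvme_set_options was called with --nvme-error-stat so the per-status-code counters are populated; this is a reconstruction for illustration, not the verbatim test script).

    # Sketch of the transient-error-count query used by the digest error test.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat returns JSON; only the NVMe transient transport
        # error counter for this bdev matters to the digest test.
        "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    # The test only passes if digest corruption produced transient transport
    # errors (this run reports 122, hence the "(( 122 > 0 ))" check below).
    (( errcount > 0 )) && echo "digest errors detected: $errcount"

The same pattern repeats for the randwrite 4096/128 run that starts further down: a fresh bdevperf is launched on the bperf socket, error statistics are enabled, the controller is re-attached with --ddgst, crc32c corruption is injected via accel_error_inject_error, and the counter is read back the same way.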
00:26:07.079 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:07.079 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 122 > 0 )) 00:26:07.079 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3105167 00:26:07.079 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3105167 ']' 00:26:07.079 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3105167 00:26:07.079 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:07.079 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:07.079 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3105167 00:26:07.339 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:07.339 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:07.339 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3105167' 00:26:07.339 killing process with pid 3105167 00:26:07.339 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3105167 00:26:07.339 Received shutdown signal, test time was about 2.000000 seconds 00:26:07.339 00:26:07.339 Latency(us) 00:26:07.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.339 =================================================================================================================== 00:26:07.339 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:07.339 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3105167 00:26:07.339 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:07.339 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:07.339 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:07.339 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:07.339 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:07.339 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3105830 00:26:07.339 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3105830 /var/tmp/bperf.sock 00:26:07.339 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:07.339 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3105830 ']' 00:26:07.339 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:07.339 14:07:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:07.339 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:07.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:07.339 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:07.339 14:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:07.599 [2024-07-26 14:07:34.776116] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:26:07.600 [2024-07-26 14:07:34.776165] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3105830 ] 00:26:07.600 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.600 [2024-07-26 14:07:34.830352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.600 [2024-07-26 14:07:34.909939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.170 14:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:08.170 14:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:08.170 14:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:08.170 14:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:08.431 14:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:08.431 14:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.431 14:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:08.431 14:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.431 14:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:08.431 14:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:09.002 nvme0n1 00:26:09.002 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:09.002 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.002 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:09.002 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:26:09.002 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:09.002 14:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:09.002 Running I/O for 2 seconds... 00:26:09.002 [2024-07-26 14:07:36.275458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190fe720 00:26:09.002 [2024-07-26 14:07:36.276387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.002 [2024-07-26 14:07:36.276415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:09.002 [2024-07-26 14:07:36.286267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190fd640 00:26:09.002 [2024-07-26 14:07:36.287165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.002 [2024-07-26 14:07:36.287188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:09.002 [2024-07-26 14:07:36.295964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190fd640 00:26:09.002 [2024-07-26 14:07:36.296214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.002 [2024-07-26 14:07:36.296233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:09.002 [2024-07-26 14:07:36.305653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190fd640 00:26:09.002 [2024-07-26 14:07:36.305891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.002 [2024-07-26 14:07:36.305910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:09.002 [2024-07-26 14:07:36.315270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190fd640 00:26:09.002 [2024-07-26 14:07:36.315507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.002 [2024-07-26 14:07:36.315526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:09.002 [2024-07-26 14:07:36.324864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190fd640 00:26:09.002 [2024-07-26 14:07:36.325103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.002 [2024-07-26 14:07:36.325122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:09.002 [2024-07-26 14:07:36.334410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1819420) with pdu=0x2000190fd640
00:26:09.002 [2024-07-26 14:07:36.334649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.002 [2024-07-26 14:07:36.334668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:09.002 [2024-07-26 14:07:36.343991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190fd640
00:26:09.002 [2024-07-26 14:07:36.344237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.002 [2024-07-26 14:07:36.344256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0
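Every WRITE queued during this stretch of the run fails the same way: tcp.c flags a data digest error on the qpair and the command is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), with only the cid, lba, and PDU offset changing from record to record; the pattern continues until roughly 14:07:37.72, where the records below pick up. As a hedged aside (not taken from this log or from SPDK's sources), the minimal standalone C sketch below shows the CRC32C calculation that NVMe/TCP data digests (DDGST) are defined over, and why a single corrupted payload byte is enough for the receiver's recomputed digest to mismatch the one carried in the PDU.

#include <stdint.h>
#include <stdio.h>
#include <stddef.h>
#include <string.h>

/*
 * Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78) -- the checksum
 * that NVMe/TCP header and data digests (HDGST/DDGST) are defined over.
 * Illustration only; SPDK uses its own table/instruction-accelerated version.
 */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int bit = 0; bit < 8; bit++) {
			crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
		}
	}

	return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
	uint8_t payload[0x1000];          /* one 4 KiB data block, as in the log's len:0x1000 */

	memset(payload, 0xA5, sizeof(payload));
	uint32_t good = crc32c(payload, sizeof(payload));

	payload[100] ^= 0x01;             /* flip a single payload bit */
	uint32_t bad = crc32c(payload, sizeof(payload));

	/*
	 * The receiver recomputes the digest over the DATA field of the PDU;
	 * any mismatch with the DDGST carried in the PDU is reported as a
	 * data digest error, as in the records around this point of the log.
	 */
	printf("digest over intact payload:    0x%08x\n", good);
	printf("digest over corrupted payload: 0x%08x\n", bad);

	return 0;
}

Compiling and running the sketch (for example, gcc -O2 crc32c_demo.c && ./a.out; the file name and buffer contents are illustrative assumptions) prints two different digests for the intact and the corrupted 4 KiB buffer, which is exactly the mismatch condition the surrounding records report before each command is failed back with a transient transport error.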
00:26:10.311 [2024-07-26 14:07:37.712633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f0ff8
00:26:10.311 [2024-07-26 14:07:37.713143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.311 [2024-07-26 14:07:37.713161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:26:10.311 [2024-07-26 14:07:37.722159] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1819420) with pdu=0x2000190f0ff8 00:26:10.311 [2024-07-26 14:07:37.722617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.311 [2024-07-26 14:07:37.722635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:10.311 [2024-07-26 14:07:37.731833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f0ff8 00:26:10.311 [2024-07-26 14:07:37.732974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.311 [2024-07-26 14:07:37.732992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:10.311 [2024-07-26 14:07:37.744260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f0bc0 00:26:10.573 [2024-07-26 14:07:37.745232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.573 [2024-07-26 14:07:37.745252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:10.573 [2024-07-26 14:07:37.754022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.573 [2024-07-26 14:07:37.754270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.573 [2024-07-26 14:07:37.754289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.573 [2024-07-26 14:07:37.763606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.573 [2024-07-26 14:07:37.763836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.573 [2024-07-26 14:07:37.763854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.573 [2024-07-26 14:07:37.773198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.573 [2024-07-26 14:07:37.773427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.773445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:37.782747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.782976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.782994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:37.792305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.792533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.792554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:37.801867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.802095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.802113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:37.811430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.811657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.811682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:37.821158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.821386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.821404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:37.830710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.830939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.830957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:37.840274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.840502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.840520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:37.849820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.850051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.850069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:37.859376] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.859603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.859621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:37.868917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.869145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.869163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:37.878665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.878895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.878913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:37.888210] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.888436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.888454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:37.897758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.897986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.898004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:37.907327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.907559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.907577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:37.916901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.917134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.917153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 
[2024-07-26 14:07:37.926554] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.926783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.926800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:37.936108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.936353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.936372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:37.945717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.945944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.945963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:37.955280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.955509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.955528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:37.964831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.965061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.965081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:37.974400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.974626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.974644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:37.983959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.984193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.984211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
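The pass/fail decision is not made by counting these console lines. A little further down, the trace reads the aggregated error counter back over bperf's RPC socket and extracts it with jq; a minimal sketch of that query, assembled from the rpc.py path, socket, and filter visible in the trace (illustrative only):

  # Pull the transient transport error count from bdev_nvme's per-status error counters,
  # exactly the query the trace issues via bperf_rpc / get_transient_errcount
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

These counters are only populated because the controller was created with bdev_nvme_set_options --nvme-error-stat, and the test merely asserts the count is non-zero (the (( 199 > 0 )) check below), so the exact number of hits per run does not matter.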
00:26:10.574 [2024-07-26 14:07:37.993499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:37.993727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:37.993746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.574 [2024-07-26 14:07:38.003162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.574 [2024-07-26 14:07:38.003393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.574 [2024-07-26 14:07:38.003412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.874 [2024-07-26 14:07:38.013028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.874 [2024-07-26 14:07:38.013267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.874 [2024-07-26 14:07:38.013287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.874 [2024-07-26 14:07:38.022875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.874 [2024-07-26 14:07:38.023119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.874 [2024-07-26 14:07:38.023138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.874 [2024-07-26 14:07:38.032737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.874 [2024-07-26 14:07:38.032972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.874 [2024-07-26 14:07:38.032991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.874 [2024-07-26 14:07:38.042672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.874 [2024-07-26 14:07:38.042911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.874 [2024-07-26 14:07:38.042933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.874 [2024-07-26 14:07:38.052853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.874 [2024-07-26 14:07:38.053122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.874 [2024-07-26 14:07:38.053142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 
sqhd:007e p:0 m:0 dnr:0 00:26:10.874 [2024-07-26 14:07:38.063565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.874 [2024-07-26 14:07:38.063815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.874 [2024-07-26 14:07:38.063834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.874 [2024-07-26 14:07:38.073946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.874 [2024-07-26 14:07:38.074190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.874 [2024-07-26 14:07:38.074210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.874 [2024-07-26 14:07:38.084018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.874 [2024-07-26 14:07:38.084261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.874 [2024-07-26 14:07:38.084280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.874 [2024-07-26 14:07:38.093840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.875 [2024-07-26 14:07:38.094074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.875 [2024-07-26 14:07:38.094092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.875 [2024-07-26 14:07:38.103845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.875 [2024-07-26 14:07:38.104078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.875 [2024-07-26 14:07:38.104097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.875 [2024-07-26 14:07:38.113414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.875 [2024-07-26 14:07:38.113645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.875 [2024-07-26 14:07:38.113663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.875 [2024-07-26 14:07:38.122978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.875 [2024-07-26 14:07:38.123230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.875 [2024-07-26 14:07:38.123249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.875 [2024-07-26 14:07:38.132510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.875 [2024-07-26 14:07:38.132744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.875 [2024-07-26 14:07:38.132769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.875 [2024-07-26 14:07:38.142120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.875 [2024-07-26 14:07:38.142352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.875 [2024-07-26 14:07:38.142370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.875 [2024-07-26 14:07:38.151645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.875 [2024-07-26 14:07:38.151875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.875 [2024-07-26 14:07:38.151894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.875 [2024-07-26 14:07:38.161223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.875 [2024-07-26 14:07:38.161454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.875 [2024-07-26 14:07:38.161472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.875 [2024-07-26 14:07:38.170777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.875 [2024-07-26 14:07:38.171014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.875 [2024-07-26 14:07:38.171032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.875 [2024-07-26 14:07:38.180300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.875 [2024-07-26 14:07:38.180534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.875 [2024-07-26 14:07:38.180551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.875 [2024-07-26 14:07:38.189839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.875 [2024-07-26 14:07:38.190072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.875 [2024-07-26 14:07:38.190092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.875 [2024-07-26 14:07:38.199390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.875 [2024-07-26 14:07:38.199616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.875 [2024-07-26 14:07:38.199634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.875 [2024-07-26 14:07:38.208928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.875 [2024-07-26 14:07:38.209157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.875 [2024-07-26 14:07:38.209176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.875 [2024-07-26 14:07:38.218492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.875 [2024-07-26 14:07:38.218722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.875 [2024-07-26 14:07:38.218740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.875 [2024-07-26 14:07:38.228028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.875 [2024-07-26 14:07:38.228261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.875 [2024-07-26 14:07:38.228280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.875 [2024-07-26 14:07:38.237587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1819420) with pdu=0x2000190f2948 00:26:10.875 [2024-07-26 14:07:38.237814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.875 [2024-07-26 14:07:38.237832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.875 00:26:10.875 Latency(us) 00:26:10.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.875 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:10.875 nvme0n1 : 2.00 25399.03 99.21 0.00 0.00 5030.80 2450.48 33508.84 00:26:10.875 =================================================================================================================== 00:26:10.875 Total : 25399.03 99.21 0.00 0.00 5030.80 2450.48 33508.84 00:26:10.875 0 00:26:10.875 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:10.875 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:10.875 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:10.875 | .driver_specific 00:26:10.875 | .nvme_error 
00:26:10.875 | .status_code 00:26:10.875 | .command_transient_transport_error' 00:26:10.875 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:11.136 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 199 > 0 )) 00:26:11.136 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3105830 00:26:11.136 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3105830 ']' 00:26:11.136 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3105830 00:26:11.136 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:11.136 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:11.136 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3105830 00:26:11.136 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:11.136 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:11.136 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3105830' 00:26:11.136 killing process with pid 3105830 00:26:11.136 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3105830 00:26:11.136 Received shutdown signal, test time was about 2.000000 seconds 00:26:11.136 00:26:11.136 Latency(us) 00:26:11.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.136 =================================================================================================================== 00:26:11.136 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:11.136 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3105830 00:26:11.396 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:11.396 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:11.396 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:11.396 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:11.396 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:11.396 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3106526 00:26:11.396 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3106526 /var/tmp/bperf.sock 00:26:11.396 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:11.396 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3106526 ']' 00:26:11.396 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:26:11.396 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:11.396 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:11.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:11.397 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:11.397 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:11.397 [2024-07-26 14:07:38.712837] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:26:11.397 [2024-07-26 14:07:38.712883] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3106526 ] 00:26:11.397 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:11.397 Zero copy mechanism will not be used. 00:26:11.397 EAL: No free 2048 kB hugepages reported on node 1 00:26:11.397 [2024-07-26 14:07:38.765954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.656 [2024-07-26 14:07:38.846483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.227 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:12.227 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:12.227 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:12.227 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:12.487 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:12.487 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.487 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:12.487 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.487 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:12.487 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:12.748 nvme0n1 00:26:12.748 14:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:12.748 14:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.748 14:07:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:12.748 14:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.748 14:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:12.748 14:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:13.008 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:13.008 Zero copy mechanism will not be used. 00:26:13.008 Running I/O for 2 seconds... 00:26:13.008 [2024-07-26 14:07:40.255350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.008 [2024-07-26 14:07:40.255943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.008 [2024-07-26 14:07:40.255972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.008 [2024-07-26 14:07:40.276261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.008 [2024-07-26 14:07:40.276773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.008 [2024-07-26 14:07:40.276796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.008 [2024-07-26 14:07:40.299410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.008 [2024-07-26 14:07:40.300116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.008 [2024-07-26 14:07:40.300138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.008 [2024-07-26 14:07:40.321958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.008 [2024-07-26 14:07:40.322334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.008 [2024-07-26 14:07:40.322356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.008 [2024-07-26 14:07:40.346425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.008 [2024-07-26 14:07:40.347029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.008 [2024-07-26 14:07:40.347053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.008 [2024-07-26 14:07:40.370862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.008 [2024-07-26 14:07:40.371446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.008 [2024-07-26 14:07:40.371470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.008 [2024-07-26 14:07:40.395410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.008 [2024-07-26 14:07:40.396089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.008 [2024-07-26 14:07:40.396109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.008 [2024-07-26 14:07:40.416688] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.008 [2024-07-26 14:07:40.417188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.008 [2024-07-26 14:07:40.417208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.008 [2024-07-26 14:07:40.441731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.008 [2024-07-26 14:07:40.442347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.008 [2024-07-26 14:07:40.442367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.269 [2024-07-26 14:07:40.466215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.269 [2024-07-26 14:07:40.466907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.269 [2024-07-26 14:07:40.466927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.269 [2024-07-26 14:07:40.487240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.269 [2024-07-26 14:07:40.487853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.269 [2024-07-26 14:07:40.487873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.269 [2024-07-26 14:07:40.511033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.269 [2024-07-26 14:07:40.511936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.269 [2024-07-26 14:07:40.511955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.269 [2024-07-26 14:07:40.535456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.269 [2024-07-26 14:07:40.536244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.269 [2024-07-26 14:07:40.536263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.269 [2024-07-26 14:07:40.560290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.269 [2024-07-26 14:07:40.561093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.269 [2024-07-26 14:07:40.561112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.269 [2024-07-26 14:07:40.584459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.269 [2024-07-26 14:07:40.585113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.269 [2024-07-26 14:07:40.585132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.269 [2024-07-26 14:07:40.607219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.269 [2024-07-26 14:07:40.607911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.269 [2024-07-26 14:07:40.607930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.269 [2024-07-26 14:07:40.631228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.269 [2024-07-26 14:07:40.631914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.269 [2024-07-26 14:07:40.631933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.269 [2024-07-26 14:07:40.653330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.269 [2024-07-26 14:07:40.654023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.269 [2024-07-26 14:07:40.654045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.269 [2024-07-26 14:07:40.675578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.269 [2024-07-26 14:07:40.676306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.269 [2024-07-26 14:07:40.676326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.269 [2024-07-26 14:07:40.698623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.269 
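Condensed from the xtrace above, this second pass drives the same data-digest path with 128 KiB random writes at queue depth 16. The sketch below restates that sequence using only commands and arguments shown in the trace; $SPDK_DIR is shorthand for the workspace path, the tgt_rpc helper stands in for the trace's rpc_cmd, and its default RPC socket is an assumption:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # bdevperf on its own RPC socket: core mask 0x2, 128 KiB randwrite, qd 16, 2 s, wait for RPC (-z);
  # the test waits for this socket (waitforlisten) before issuing any RPCs
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

  bperf_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
  tgt_rpc()   { "$SPDK_DIR/scripts/rpc.py" "$@"; }   # rpc_cmd in the trace; default socket path assumed

  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep NVMe error stats, retry forever
  tgt_rpc accel_error_inject_error -o crc32c -t disable                     # clean CRC32C while connecting
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0                                # data digest enabled on the connection
  tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 32               # corrupt every 32nd CRC32C operation
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

The zero-copy notices in the trace are expected with this configuration: an I/O size of 131072 exceeds the 65536-byte zero-copy threshold, so bdevperf reports that the zero-copy mechanism will not be used.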
[2024-07-26 14:07:40.699413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.269 [2024-07-26 14:07:40.699433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.529 [2024-07-26 14:07:40.724463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.529 [2024-07-26 14:07:40.725355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.529 [2024-07-26 14:07:40.725374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.529 [2024-07-26 14:07:40.749381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.529 [2024-07-26 14:07:40.750271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.529 [2024-07-26 14:07:40.750290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.529 [2024-07-26 14:07:40.772048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.529 [2024-07-26 14:07:40.772850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.529 [2024-07-26 14:07:40.772869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.529 [2024-07-26 14:07:40.794157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.529 [2024-07-26 14:07:40.794716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.529 [2024-07-26 14:07:40.794735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.529 [2024-07-26 14:07:40.819028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.529 [2024-07-26 14:07:40.819730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.529 [2024-07-26 14:07:40.819749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.529 [2024-07-26 14:07:40.844001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.529 [2024-07-26 14:07:40.844678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.529 [2024-07-26 14:07:40.844698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.529 [2024-07-26 14:07:40.867878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.530 [2024-07-26 14:07:40.868589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.530 [2024-07-26 14:07:40.868608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.530 [2024-07-26 14:07:40.893523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.530 [2024-07-26 14:07:40.894119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.530 [2024-07-26 14:07:40.894138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.530 [2024-07-26 14:07:40.919861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.530 [2024-07-26 14:07:40.920790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.530 [2024-07-26 14:07:40.920810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.530 [2024-07-26 14:07:40.941642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.530 [2024-07-26 14:07:40.942311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.530 [2024-07-26 14:07:40.942331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.790 [2024-07-26 14:07:40.967290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.790 [2024-07-26 14:07:40.967917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.790 [2024-07-26 14:07:40.967936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.790 [2024-07-26 14:07:40.991639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.790 [2024-07-26 14:07:40.992153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.790 [2024-07-26 14:07:40.992184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.790 [2024-07-26 14:07:41.015976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.790 [2024-07-26 14:07:41.016720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.790 [2024-07-26 14:07:41.016740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.790 [2024-07-26 14:07:41.039461] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.790 [2024-07-26 14:07:41.040159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.790 [2024-07-26 14:07:41.040178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.790 [2024-07-26 14:07:41.065438] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.790 [2024-07-26 14:07:41.066374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.790 [2024-07-26 14:07:41.066393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.790 [2024-07-26 14:07:41.089781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.790 [2024-07-26 14:07:41.090754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.790 [2024-07-26 14:07:41.090773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.790 [2024-07-26 14:07:41.115515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.790 [2024-07-26 14:07:41.116339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.790 [2024-07-26 14:07:41.116358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.790 [2024-07-26 14:07:41.141636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.790 [2024-07-26 14:07:41.142672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.790 [2024-07-26 14:07:41.142692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.790 [2024-07-26 14:07:41.166800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.790 [2024-07-26 14:07:41.167558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.790 [2024-07-26 14:07:41.167576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.790 [2024-07-26 14:07:41.191259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.790 [2024-07-26 14:07:41.191987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.790 [2024-07-26 14:07:41.192005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:13.790 [2024-07-26 14:07:41.214581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:13.790 [2024-07-26 14:07:41.215100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.790 [2024-07-26 14:07:41.215120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.050 [2024-07-26 14:07:41.240381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.050 [2024-07-26 14:07:41.241204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.050 [2024-07-26 14:07:41.241223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.050 [2024-07-26 14:07:41.264898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.050 [2024-07-26 14:07:41.265494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.050 [2024-07-26 14:07:41.265514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.050 [2024-07-26 14:07:41.289741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.050 [2024-07-26 14:07:41.290459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.050 [2024-07-26 14:07:41.290478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.050 [2024-07-26 14:07:41.315359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.050 [2024-07-26 14:07:41.315999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.050 [2024-07-26 14:07:41.316018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.050 [2024-07-26 14:07:41.338628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.050 [2024-07-26 14:07:41.339500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.050 [2024-07-26 14:07:41.339519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.050 [2024-07-26 14:07:41.366125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.050 [2024-07-26 14:07:41.366783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.050 [2024-07-26 14:07:41.366801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.050 [2024-07-26 14:07:41.391517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.051 [2024-07-26 14:07:41.392295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.051 [2024-07-26 14:07:41.392315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.051 [2024-07-26 14:07:41.415092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.051 [2024-07-26 14:07:41.415965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.051 [2024-07-26 14:07:41.415985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.051 [2024-07-26 14:07:41.438592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.051 [2024-07-26 14:07:41.439387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.051 [2024-07-26 14:07:41.439406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.051 [2024-07-26 14:07:41.465904] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.051 [2024-07-26 14:07:41.466590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.051 [2024-07-26 14:07:41.466609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.314 [2024-07-26 14:07:41.492684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.314 [2024-07-26 14:07:41.493664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.314 [2024-07-26 14:07:41.493684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.314 [2024-07-26 14:07:41.516621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.314 [2024-07-26 14:07:41.517282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.314 [2024-07-26 14:07:41.517313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.314 [2024-07-26 14:07:41.542469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.314 [2024-07-26 14:07:41.543089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.314 [2024-07-26 14:07:41.543109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.314 [2024-07-26 14:07:41.568682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.314 [2024-07-26 14:07:41.569458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.314 [2024-07-26 14:07:41.569476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.314 [2024-07-26 14:07:41.595613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.314 [2024-07-26 14:07:41.596316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.314 [2024-07-26 14:07:41.596335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.314 [2024-07-26 14:07:41.619746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.314 [2024-07-26 14:07:41.620326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.314 [2024-07-26 14:07:41.620346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.314 [2024-07-26 14:07:41.646502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.314 [2024-07-26 14:07:41.647325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.314 [2024-07-26 14:07:41.647344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.314 [2024-07-26 14:07:41.673949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.314 [2024-07-26 14:07:41.674738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.314 [2024-07-26 14:07:41.674756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.314 [2024-07-26 14:07:41.699737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.314 [2024-07-26 14:07:41.700633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.314 [2024-07-26 14:07:41.700652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.314 [2024-07-26 14:07:41.723961] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.314 [2024-07-26 14:07:41.724585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.314 [2024-07-26 14:07:41.724604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.574 [2024-07-26 14:07:41.755762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.574 [2024-07-26 14:07:41.756398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.574 [2024-07-26 14:07:41.756418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.574 [2024-07-26 14:07:41.782775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.574 [2024-07-26 14:07:41.783468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.574 [2024-07-26 14:07:41.783486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.574 [2024-07-26 14:07:41.808727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.574 [2024-07-26 14:07:41.809512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.574 [2024-07-26 14:07:41.809532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.574 [2024-07-26 14:07:41.833876] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.574 [2024-07-26 14:07:41.834764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.574 [2024-07-26 14:07:41.834783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.574 [2024-07-26 14:07:41.858663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.574 [2024-07-26 14:07:41.859376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.574 [2024-07-26 14:07:41.859395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.574 [2024-07-26 14:07:41.883483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.574 [2024-07-26 14:07:41.884210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.574 [2024-07-26 14:07:41.884229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.574 [2024-07-26 14:07:41.908187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.574 [2024-07-26 14:07:41.908866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.574 
[2024-07-26 14:07:41.908885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.574 [2024-07-26 14:07:41.933247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.574 [2024-07-26 14:07:41.934051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.574 [2024-07-26 14:07:41.934071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.574 [2024-07-26 14:07:41.959025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.574 [2024-07-26 14:07:41.959914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.575 [2024-07-26 14:07:41.959933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.575 [2024-07-26 14:07:41.985323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.575 [2024-07-26 14:07:41.986285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.575 [2024-07-26 14:07:41.986304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.575 [2024-07-26 14:07:42.007908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.575 [2024-07-26 14:07:42.008687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.575 [2024-07-26 14:07:42.008706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.835 [2024-07-26 14:07:42.033142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.835 [2024-07-26 14:07:42.033607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.835 [2024-07-26 14:07:42.033626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.835 [2024-07-26 14:07:42.058665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.835 [2024-07-26 14:07:42.059457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.835 [2024-07-26 14:07:42.059476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.835 [2024-07-26 14:07:42.081532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.835 [2024-07-26 14:07:42.082240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:14.835 [2024-07-26 14:07:42.082264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.835 [2024-07-26 14:07:42.105387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.835 [2024-07-26 14:07:42.106412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.835 [2024-07-26 14:07:42.106431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.835 [2024-07-26 14:07:42.130777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.835 [2024-07-26 14:07:42.131484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.835 [2024-07-26 14:07:42.131503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.835 [2024-07-26 14:07:42.156627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.835 [2024-07-26 14:07:42.157301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.835 [2024-07-26 14:07:42.157319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.835 [2024-07-26 14:07:42.182292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.835 [2024-07-26 14:07:42.183281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.835 [2024-07-26 14:07:42.183299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.835 [2024-07-26 14:07:42.207681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x181b0a0) with pdu=0x2000190fef90 00:26:14.835 [2024-07-26 14:07:42.208584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.835 [2024-07-26 14:07:42.208603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.835 00:26:14.835 Latency(us) 00:26:14.835 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.835 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:14.835 nvme0n1 : 2.01 1245.93 155.74 0.00 0.00 12802.44 9061.06 31685.23 00:26:14.835 =================================================================================================================== 00:26:14.835 Total : 1245.93 155.74 0.00 0.00 12802.44 9061.06 31685.23 00:26:14.835 0 00:26:14.835 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:14.835 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 
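The get_transient_errcount helper invoked here reads the error counter that the digest test has been accumulating: as the next lines show, it asks bdevperf for bdev I/O statistics over its RPC socket and filters the JSON with jq (in this run the counter resolves to 80). A minimal standalone sketch of the same query, using the paths from this run (the errcount variable name is illustrative):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # host/digest.sh@71 only passes the test when at least one transient transport error was counted
  (( errcount > 0 )) && echo "observed $errcount transient transport errors"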
00:26:14.835 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:14.835 | .driver_specific 00:26:14.835 | .nvme_error 00:26:14.835 | .status_code 00:26:14.835 | .command_transient_transport_error' 00:26:14.835 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:15.095 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 80 > 0 )) 00:26:15.095 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3106526 00:26:15.095 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3106526 ']' 00:26:15.095 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3106526 00:26:15.095 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:15.095 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:15.095 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3106526 00:26:15.095 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:15.095 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:15.095 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3106526' 00:26:15.095 killing process with pid 3106526 00:26:15.095 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3106526 00:26:15.095 Received shutdown signal, test time was about 2.000000 seconds 00:26:15.095 00:26:15.095 Latency(us) 00:26:15.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:15.095 =================================================================================================================== 00:26:15.095 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:15.095 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3106526 00:26:15.356 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3104403 00:26:15.356 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3104403 ']' 00:26:15.356 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3104403 00:26:15.356 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:15.356 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:15.356 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3104403 00:26:15.356 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:15.356 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:15.356 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 3104403' 00:26:15.356 killing process with pid 3104403 00:26:15.356 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3104403 00:26:15.356 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3104403 00:26:15.616 00:26:15.616 real 0m16.909s 00:26:15.616 user 0m33.573s 00:26:15.616 sys 0m3.403s 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:15.616 ************************************ 00:26:15.616 END TEST nvmf_digest_error 00:26:15.616 ************************************ 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:15.616 rmmod nvme_tcp 00:26:15.616 rmmod nvme_fabrics 00:26:15.616 rmmod nvme_keyring 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3104403 ']' 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3104403 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 3104403 ']' 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 3104403 00:26:15.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3104403) - No such process 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 3104403 is not found' 00:26:15.616 Process with pid 3104403 is not found 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.616 14:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
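For reference, the nvmftestfini teardown recorded here amounts to roughly the following, with the pid and interface names taken from this run; the final namespace removal is shown as a plain ip netns delete purely for illustration, since only the _remove_spdk_ns wrapper is visible in the trace:

  sync
  modprobe -v -r nvme-tcp        # drags out nvme_tcp, nvme_fabrics and nvme_keyring, as logged above
  modprobe -v -r nvme-fabrics
  kill -0 3104403 2>/dev/null && kill 3104403   # already gone here, hence the "No such process" message
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                      # address flush that follows below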
00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:18.162 00:26:18.162 real 0m41.618s 00:26:18.162 user 1m8.367s 00:26:18.162 sys 0m11.107s 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:18.162 ************************************ 00:26:18.162 END TEST nvmf_digest 00:26:18.162 ************************************ 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.162 ************************************ 00:26:18.162 START TEST nvmf_bdevperf 00:26:18.162 ************************************ 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:18.162 * Looking for test storage... 00:26:18.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:18.162 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.163 
14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:18.163 14:07:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@296 -- # local -ga e810 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:23.451 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:23.452 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:23.452 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:23.452 14:07:50 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:23.452 Found net devices under 0000:86:00.0: cvl_0_0 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:23.452 Found net devices under 0000:86:00.1: cvl_0_1 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
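The nvmf_tcp_init steps that follow isolate the target-side port in its own network namespace while the initiator keeps using the root namespace; condensed, with the interface names and addresses exactly as they appear in this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # cross-namespace reachability check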
00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:23.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:23.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:26:23.452 00:26:23.452 --- 10.0.0.2 ping statistics --- 00:26:23.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.452 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:23.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:23.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:26:23.452 00:26:23.452 --- 10.0.0.1 ping statistics --- 00:26:23.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.452 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3110530 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3110530 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3110530 ']' 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:23.452 14:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.453 [2024-07-26 14:07:50.626469] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
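As logged above, nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks until the application answers on /var/tmp/spdk.sock. A condensed sketch of that bring-up; the polling loop is a simplified stand-in for the waitforlisten helper, not its actual implementation:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done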
00:26:23.453 [2024-07-26 14:07:50.626515] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:23.453 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.453 [2024-07-26 14:07:50.683280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:23.453 [2024-07-26 14:07:50.762891] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:23.453 [2024-07-26 14:07:50.762928] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:23.453 [2024-07-26 14:07:50.762935] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:23.453 [2024-07-26 14:07:50.762941] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:23.453 [2024-07-26 14:07:50.762946] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:23.453 [2024-07-26 14:07:50.763059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:23.453 [2024-07-26 14:07:50.763144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:23.453 [2024-07-26 14:07:50.763146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.021 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:24.021 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:26:24.021 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:24.021 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:24.021 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:24.282 [2024-07-26 14:07:51.478427] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:24.282 Malloc0 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:24.282 14:07:51 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:24.282 [2024-07-26 14:07:51.536156] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:24.282 { 00:26:24.282 "params": { 00:26:24.282 "name": "Nvme$subsystem", 00:26:24.282 "trtype": "$TEST_TRANSPORT", 00:26:24.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.282 "adrfam": "ipv4", 00:26:24.282 "trsvcid": "$NVMF_PORT", 00:26:24.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.282 "hdgst": ${hdgst:-false}, 00:26:24.282 "ddgst": ${ddgst:-false} 00:26:24.282 }, 00:26:24.282 "method": "bdev_nvme_attach_controller" 00:26:24.282 } 00:26:24.282 EOF 00:26:24.282 )") 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:24.282 14:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:24.282 "params": { 00:26:24.282 "name": "Nvme1", 00:26:24.282 "trtype": "tcp", 00:26:24.282 "traddr": "10.0.0.2", 00:26:24.282 "adrfam": "ipv4", 00:26:24.282 "trsvcid": "4420", 00:26:24.282 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:24.282 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:24.282 "hdgst": false, 00:26:24.282 "ddgst": false 00:26:24.282 }, 00:26:24.282 "method": "bdev_nvme_attach_controller" 00:26:24.282 }' 00:26:24.282 [2024-07-26 14:07:51.583851] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
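The five rpc_cmd calls above are the whole target configuration for this test: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem cnode1, its namespace, and a listener on 10.0.0.2:4420. A sketch of the same sequence issued directly through scripts/rpc.py (rpc_cmd in the harness forwards to this script; the default /var/tmp/spdk.sock socket is assumed, and the flags are copied verbatim from the rpc_cmd lines):

RPC="$SPDK/scripts/rpc.py"   # SPDK workspace path as above
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
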
00:26:24.282 [2024-07-26 14:07:51.583894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3110775 ] 00:26:24.282 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.282 [2024-07-26 14:07:51.638885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.282 [2024-07-26 14:07:51.712774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.852 Running I/O for 1 seconds... 00:26:25.790 00:26:25.790 Latency(us) 00:26:25.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.790 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:25.790 Verification LBA range: start 0x0 length 0x4000 00:26:25.790 Nvme1n1 : 1.01 11159.35 43.59 0.00 0.00 11415.73 1866.35 15158.76 00:26:25.790 =================================================================================================================== 00:26:25.790 Total : 11159.35 43.59 0.00 0.00 11415.73 1866.35 15158.76 00:26:25.790 14:07:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3111013 00:26:25.790 14:07:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:25.790 14:07:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:25.790 14:07:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:25.790 14:07:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:25.790 14:07:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:25.790 14:07:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:25.790 14:07:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:25.790 { 00:26:25.790 "params": { 00:26:25.790 "name": "Nvme$subsystem", 00:26:25.790 "trtype": "$TEST_TRANSPORT", 00:26:25.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:25.790 "adrfam": "ipv4", 00:26:25.790 "trsvcid": "$NVMF_PORT", 00:26:25.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:25.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:25.790 "hdgst": ${hdgst:-false}, 00:26:25.790 "ddgst": ${ddgst:-false} 00:26:25.790 }, 00:26:25.790 "method": "bdev_nvme_attach_controller" 00:26:25.790 } 00:26:25.790 EOF 00:26:25.790 )") 00:26:25.790 14:07:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:25.790 14:07:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:25.790 14:07:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:25.790 14:07:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:25.790 "params": { 00:26:25.790 "name": "Nvme1", 00:26:25.790 "trtype": "tcp", 00:26:25.790 "traddr": "10.0.0.2", 00:26:25.790 "adrfam": "ipv4", 00:26:25.790 "trsvcid": "4420", 00:26:25.790 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:25.790 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:25.790 "hdgst": false, 00:26:25.790 "ddgst": false 00:26:25.790 }, 00:26:25.790 "method": "bdev_nvme_attach_controller" 00:26:25.790 }' 00:26:25.790 [2024-07-26 14:07:53.223402] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
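gen_nvmf_target_json above emits a single bdev_nvme_attach_controller entry (printed verbatim by the printf) and hands it to bdevperf over a /dev/fd process substitution. A hand-rolled equivalent that writes the config to a file instead; the attach entry is copied from the log, while the surrounding subsystems/bdev wrapper is the standard SPDK --json layout, assumed here rather than shown in the trace:

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# Same bdevperf invocation as host/bdevperf.sh@29, minus the process substitution.
"$SPDK/build/examples/bdevperf" --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 15 -f
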
00:26:25.790 [2024-07-26 14:07:53.223453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3111013 ] 00:26:26.050 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.050 [2024-07-26 14:07:53.279087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.050 [2024-07-26 14:07:53.349026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.309 Running I/O for 15 seconds... 00:26:28.852 14:07:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3110530 00:26:28.852 14:07:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:28.852 [2024-07-26 14:07:56.195934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.195980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.195998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:101248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:101272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:101288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:101296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:101312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:101336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.852 [2024-07-26 14:07:56.196230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.852 [2024-07-26 14:07:56.196245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.852 [2024-07-26 14:07:56.196260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.852 [2024-07-26 14:07:56.196277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.852 [2024-07-26 14:07:56.196294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.852 [2024-07-26 14:07:56.196309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.852 [2024-07-26 14:07:56.196325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.852 [2024-07-26 14:07:56.196342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.852 [2024-07-26 14:07:56.196358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:101360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:28.852 [2024-07-26 14:07:56.196433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:101392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:101400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:101408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:101416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.852 [2024-07-26 14:07:56.196484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.852 [2024-07-26 14:07:56.196491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 
14:07:56.196579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196735] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:101984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:102000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:82 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:102048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:102064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:102072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.196986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.196992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.197000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.197007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.197015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:102096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.197022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.197030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:102104 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.197036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.197159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.197167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.197175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.197182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.853 [2024-07-26 14:07:56.197190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.853 [2024-07-26 14:07:56.197198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:101432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:101440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 
[2024-07-26 14:07:56.197452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197597] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:101640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:101672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.854 [2024-07-26 14:07:56.197673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.854 [2024-07-26 14:07:56.197687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.854 [2024-07-26 14:07:56.197702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:102152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.854 [2024-07-26 14:07:56.197717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:102160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.854 [2024-07-26 14:07:56.197731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.854 [2024-07-26 14:07:56.197746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:102176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.854 [2024-07-26 14:07:56.197760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.854 [2024-07-26 14:07:56.197768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:102184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.854 [2024-07-26 14:07:56.197775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.197783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:102192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.855 [2024-07-26 14:07:56.197789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.197796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.855 [2024-07-26 14:07:56.197803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.197811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:102208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.855 [2024-07-26 14:07:56.197819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.197827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.855 [2024-07-26 14:07:56.197833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.197841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.855 [2024-07-26 14:07:56.197848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.197856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:102232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.855 [2024-07-26 14:07:56.197867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.197876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:102240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.855 [2024-07-26 14:07:56.197882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.197890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:102248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.855 [2024-07-26 14:07:56.197896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.197904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:102256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.855 [2024-07-26 14:07:56.197913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.197921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.855 [2024-07-26 14:07:56.197927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.197935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.855 [2024-07-26 14:07:56.197941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.197949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.855 [2024-07-26 14:07:56.197956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.197964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:101704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.855 [2024-07-26 14:07:56.197971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.197978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:101712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.855 [2024-07-26 14:07:56.197985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.197992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:101720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.855 [2024-07-26 14:07:56.197999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.198009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.855 [2024-07-26 14:07:56.198015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.198023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd4ee0 is same with the state(5) to be set 00:26:28.855 [2024-07-26 14:07:56.198031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.855 [2024-07-26 14:07:56.198036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.855 [2024-07-26 14:07:56.198046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101736 len:8 PRP1 0x0 PRP2 0x0 00:26:28.855 [2024-07-26 14:07:56.198053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.198096] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdd4ee0 was disconnected and freed. reset controller. 00:26:28.855 [2024-07-26 14:07:56.198142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.855 [2024-07-26 14:07:56.198151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.198159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.855 [2024-07-26 14:07:56.198165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.198172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.855 [2024-07-26 14:07:56.198180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.198188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.855 [2024-07-26 14:07:56.198195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.855 [2024-07-26 14:07:56.198201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:28.855 [2024-07-26 14:07:56.201046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.855 [2024-07-26 14:07:56.201071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:28.855 [2024-07-26 14:07:56.201990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.855 [2024-07-26 14:07:56.202032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:28.855 [2024-07-26 14:07:56.202070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:28.855 [2024-07-26 14:07:56.202650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:28.855 [2024-07-26 14:07:56.203118] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.855 [2024-07-26 14:07:56.203128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.855 [2024-07-26 14:07:56.203136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.855 [2024-07-26 14:07:56.205962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.855 [2024-07-26 14:07:56.214328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.855 [2024-07-26 14:07:56.215094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.855 [2024-07-26 14:07:56.215139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:28.855 [2024-07-26 14:07:56.215161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:28.855 [2024-07-26 14:07:56.215522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:28.855 [2024-07-26 14:07:56.215696] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.855 [2024-07-26 14:07:56.215706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.855 [2024-07-26 14:07:56.215713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.855 [2024-07-26 14:07:56.218460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.855 [2024-07-26 14:07:56.227148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.855 [2024-07-26 14:07:56.227881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.855 [2024-07-26 14:07:56.227925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:28.855 [2024-07-26 14:07:56.227947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:28.855 [2024-07-26 14:07:56.228545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:28.855 [2024-07-26 14:07:56.228851] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.855 [2024-07-26 14:07:56.228860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.855 [2024-07-26 14:07:56.228867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.855 [2024-07-26 14:07:56.231494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.855 [2024-07-26 14:07:56.240001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.855 [2024-07-26 14:07:56.240734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.855 [2024-07-26 14:07:56.240778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:28.855 [2024-07-26 14:07:56.240800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:28.855 [2024-07-26 14:07:56.241397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:28.855 [2024-07-26 14:07:56.241871] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.855 [2024-07-26 14:07:56.241881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.855 [2024-07-26 14:07:56.241887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.855 [2024-07-26 14:07:56.244512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.855 [2024-07-26 14:07:56.252896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.855 [2024-07-26 14:07:56.253614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:07:56.253659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:28.856 [2024-07-26 14:07:56.253680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:28.856 [2024-07-26 14:07:56.253994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:28.856 [2024-07-26 14:07:56.254191] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.856 [2024-07-26 14:07:56.254202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.856 [2024-07-26 14:07:56.254208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.856 [2024-07-26 14:07:56.256867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.856 [2024-07-26 14:07:56.265827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.856 [2024-07-26 14:07:56.266483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:07:56.266528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:28.856 [2024-07-26 14:07:56.266550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:28.856 [2024-07-26 14:07:56.267140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:28.856 [2024-07-26 14:07:56.267725] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.856 [2024-07-26 14:07:56.267738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.856 [2024-07-26 14:07:56.267747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.856 [2024-07-26 14:07:56.271799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.856 [2024-07-26 14:07:56.279442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.856 [2024-07-26 14:07:56.280097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:07:56.280120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:28.856 [2024-07-26 14:07:56.280128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:28.856 [2024-07-26 14:07:56.280299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:28.856 [2024-07-26 14:07:56.280472] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.856 [2024-07-26 14:07:56.280482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.856 [2024-07-26 14:07:56.280489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.119 [2024-07-26 14:07:56.283272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.119 [2024-07-26 14:07:56.292349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.119 [2024-07-26 14:07:56.293093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-07-26 14:07:56.293150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.119 [2024-07-26 14:07:56.293172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.119 [2024-07-26 14:07:56.293679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.119 [2024-07-26 14:07:56.293858] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.119 [2024-07-26 14:07:56.293867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.119 [2024-07-26 14:07:56.293876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.119 [2024-07-26 14:07:56.296700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.119 [2024-07-26 14:07:56.305207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.119 [2024-07-26 14:07:56.305933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-07-26 14:07:56.305975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.119 [2024-07-26 14:07:56.305996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.119 [2024-07-26 14:07:56.306591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.119 [2024-07-26 14:07:56.307017] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.119 [2024-07-26 14:07:56.307027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.119 [2024-07-26 14:07:56.307033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.119 [2024-07-26 14:07:56.309651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.119 [2024-07-26 14:07:56.318109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.119 [2024-07-26 14:07:56.318813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-07-26 14:07:56.318855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.119 [2024-07-26 14:07:56.318877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.119 [2024-07-26 14:07:56.319369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.119 [2024-07-26 14:07:56.319544] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.119 [2024-07-26 14:07:56.319554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.119 [2024-07-26 14:07:56.319560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.119 [2024-07-26 14:07:56.322203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.119 [2024-07-26 14:07:56.331016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.119 [2024-07-26 14:07:56.331743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-07-26 14:07:56.331786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.119 [2024-07-26 14:07:56.331808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.119 [2024-07-26 14:07:56.332056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.119 [2024-07-26 14:07:56.332244] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.119 [2024-07-26 14:07:56.332254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.119 [2024-07-26 14:07:56.332260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.119 [2024-07-26 14:07:56.334918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.119 [2024-07-26 14:07:56.343977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.119 [2024-07-26 14:07:56.344714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-07-26 14:07:56.344757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.119 [2024-07-26 14:07:56.344787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.119 [2024-07-26 14:07:56.345107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.119 [2024-07-26 14:07:56.345281] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.119 [2024-07-26 14:07:56.345290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.119 [2024-07-26 14:07:56.345297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.119 [2024-07-26 14:07:56.347994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.119 [2024-07-26 14:07:56.357008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.119 [2024-07-26 14:07:56.357669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-07-26 14:07:56.357716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.119 [2024-07-26 14:07:56.357741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.119 [2024-07-26 14:07:56.358153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.119 [2024-07-26 14:07:56.358327] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.119 [2024-07-26 14:07:56.358337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.119 [2024-07-26 14:07:56.358344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.119 [2024-07-26 14:07:56.360998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.119 [2024-07-26 14:07:56.369815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.119 [2024-07-26 14:07:56.370553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-07-26 14:07:56.370597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.119 [2024-07-26 14:07:56.370619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.119 [2024-07-26 14:07:56.371163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.119 [2024-07-26 14:07:56.371337] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.119 [2024-07-26 14:07:56.371347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.119 [2024-07-26 14:07:56.371353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.119 [2024-07-26 14:07:56.374006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.119 [2024-07-26 14:07:56.382758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.119 [2024-07-26 14:07:56.383492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-07-26 14:07:56.383535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.119 [2024-07-26 14:07:56.383558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.119 [2024-07-26 14:07:56.384068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.119 [2024-07-26 14:07:56.384259] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.119 [2024-07-26 14:07:56.384277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.119 [2024-07-26 14:07:56.384284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.119 [2024-07-26 14:07:56.386938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.119 [2024-07-26 14:07:56.395596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.119 [2024-07-26 14:07:56.396300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-07-26 14:07:56.396343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.119 [2024-07-26 14:07:56.396364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.119 [2024-07-26 14:07:56.396943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.119 [2024-07-26 14:07:56.397410] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.119 [2024-07-26 14:07:56.397423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.119 [2024-07-26 14:07:56.397430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.119 [2024-07-26 14:07:56.400085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.119 [2024-07-26 14:07:56.408506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.119 [2024-07-26 14:07:56.409175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-07-26 14:07:56.409221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.120 [2024-07-26 14:07:56.409245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.120 [2024-07-26 14:07:56.409826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.120 [2024-07-26 14:07:56.410259] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.120 [2024-07-26 14:07:56.410269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.120 [2024-07-26 14:07:56.410275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.120 [2024-07-26 14:07:56.412933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.120 [2024-07-26 14:07:56.421442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.120 [2024-07-26 14:07:56.422181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-07-26 14:07:56.422225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.120 [2024-07-26 14:07:56.422247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.120 [2024-07-26 14:07:56.422824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.120 [2024-07-26 14:07:56.423051] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.120 [2024-07-26 14:07:56.423061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.120 [2024-07-26 14:07:56.423068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.120 [2024-07-26 14:07:56.425749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.120 [2024-07-26 14:07:56.434262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.120 [2024-07-26 14:07:56.434991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-07-26 14:07:56.435034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.120 [2024-07-26 14:07:56.435071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.120 [2024-07-26 14:07:56.435366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.120 [2024-07-26 14:07:56.435539] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.120 [2024-07-26 14:07:56.435549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.120 [2024-07-26 14:07:56.435555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.120 [2024-07-26 14:07:56.438198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.120 [2024-07-26 14:07:56.447245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.120 [2024-07-26 14:07:56.448002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-07-26 14:07:56.448019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.120 [2024-07-26 14:07:56.448025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.120 [2024-07-26 14:07:56.448224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.120 [2024-07-26 14:07:56.448402] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.120 [2024-07-26 14:07:56.448413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.120 [2024-07-26 14:07:56.448420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.120 [2024-07-26 14:07:56.451245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.120 [2024-07-26 14:07:56.460419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.120 [2024-07-26 14:07:56.461158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-07-26 14:07:56.461175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.120 [2024-07-26 14:07:56.461182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.120 [2024-07-26 14:07:56.461359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.120 [2024-07-26 14:07:56.461537] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.120 [2024-07-26 14:07:56.461546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.120 [2024-07-26 14:07:56.461554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.120 [2024-07-26 14:07:56.464380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.120 [2024-07-26 14:07:56.473383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.120 [2024-07-26 14:07:56.474116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-07-26 14:07:56.474159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.120 [2024-07-26 14:07:56.474180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.120 [2024-07-26 14:07:56.474502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.120 [2024-07-26 14:07:56.474675] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.120 [2024-07-26 14:07:56.474685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.120 [2024-07-26 14:07:56.474691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.120 [2024-07-26 14:07:56.477437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.120 [2024-07-26 14:07:56.486181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.120 [2024-07-26 14:07:56.486908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-07-26 14:07:56.486950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.120 [2024-07-26 14:07:56.486971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.120 [2024-07-26 14:07:56.487333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.120 [2024-07-26 14:07:56.487508] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.120 [2024-07-26 14:07:56.487517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.120 [2024-07-26 14:07:56.487524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.120 [2024-07-26 14:07:56.490172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.120 [2024-07-26 14:07:56.498985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.120 [2024-07-26 14:07:56.499729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-07-26 14:07:56.499771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.120 [2024-07-26 14:07:56.499792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.120 [2024-07-26 14:07:56.500253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.120 [2024-07-26 14:07:56.500509] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.120 [2024-07-26 14:07:56.500521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.120 [2024-07-26 14:07:56.500531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.120 [2024-07-26 14:07:56.504580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.120 [2024-07-26 14:07:56.512281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.120 [2024-07-26 14:07:56.513028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-07-26 14:07:56.513085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.120 [2024-07-26 14:07:56.513107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.120 [2024-07-26 14:07:56.513489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.120 [2024-07-26 14:07:56.513662] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.120 [2024-07-26 14:07:56.513672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.120 [2024-07-26 14:07:56.513682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.120 [2024-07-26 14:07:56.516418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.120 [2024-07-26 14:07:56.525223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.120 [2024-07-26 14:07:56.525880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-07-26 14:07:56.525925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.120 [2024-07-26 14:07:56.525947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.120 [2024-07-26 14:07:56.526501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.120 [2024-07-26 14:07:56.526665] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.120 [2024-07-26 14:07:56.526674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.120 [2024-07-26 14:07:56.526680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.120 [2024-07-26 14:07:56.529375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.120 [2024-07-26 14:07:56.538183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.120 [2024-07-26 14:07:56.538846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-07-26 14:07:56.538888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.121 [2024-07-26 14:07:56.538911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.121 [2024-07-26 14:07:56.539313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.121 [2024-07-26 14:07:56.539478] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.121 [2024-07-26 14:07:56.539487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.121 [2024-07-26 14:07:56.539493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.121 [2024-07-26 14:07:56.542222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.121 [2024-07-26 14:07:56.551257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.121 [2024-07-26 14:07:56.552076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-07-26 14:07:56.552122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.121 [2024-07-26 14:07:56.552145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.121 [2024-07-26 14:07:56.552394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.121 [2024-07-26 14:07:56.552566] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.121 [2024-07-26 14:07:56.552578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.121 [2024-07-26 14:07:56.552586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.383 [2024-07-26 14:07:56.555393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.383 [2024-07-26 14:07:56.564058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.383 [2024-07-26 14:07:56.564819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.383 [2024-07-26 14:07:56.564867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.383 [2024-07-26 14:07:56.564890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.383 [2024-07-26 14:07:56.565487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.383 [2024-07-26 14:07:56.565685] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.383 [2024-07-26 14:07:56.565694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.383 [2024-07-26 14:07:56.565702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.383 [2024-07-26 14:07:56.568541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.383 [2024-07-26 14:07:56.577279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.383 [2024-07-26 14:07:56.577960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.383 [2024-07-26 14:07:56.578003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.383 [2024-07-26 14:07:56.578026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.383 [2024-07-26 14:07:56.578549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.383 [2024-07-26 14:07:56.578724] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.383 [2024-07-26 14:07:56.578733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.383 [2024-07-26 14:07:56.578740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.383 [2024-07-26 14:07:56.581485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.383 [2024-07-26 14:07:56.590294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.383 [2024-07-26 14:07:56.591022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.383 [2024-07-26 14:07:56.591080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.383 [2024-07-26 14:07:56.591103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.383 [2024-07-26 14:07:56.591553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.383 [2024-07-26 14:07:56.591718] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.383 [2024-07-26 14:07:56.591727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.383 [2024-07-26 14:07:56.591733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.383 [2024-07-26 14:07:56.594362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.383 [2024-07-26 14:07:56.603179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.383 [2024-07-26 14:07:56.603781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.383 [2024-07-26 14:07:56.603824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.383 [2024-07-26 14:07:56.603847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.383 [2024-07-26 14:07:56.604441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.383 [2024-07-26 14:07:56.605020] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.383 [2024-07-26 14:07:56.605029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.383 [2024-07-26 14:07:56.605035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.383 [2024-07-26 14:07:56.607687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.383 [2024-07-26 14:07:56.616142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.383 [2024-07-26 14:07:56.616802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.383 [2024-07-26 14:07:56.616846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.383 [2024-07-26 14:07:56.616868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.383 [2024-07-26 14:07:56.617460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.383 [2024-07-26 14:07:56.617859] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.383 [2024-07-26 14:07:56.617868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.383 [2024-07-26 14:07:56.617874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.383 [2024-07-26 14:07:56.620553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.383 [2024-07-26 14:07:56.629067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.383 [2024-07-26 14:07:56.629736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.383 [2024-07-26 14:07:56.629778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.383 [2024-07-26 14:07:56.629801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.383 [2024-07-26 14:07:56.630391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.383 [2024-07-26 14:07:56.630883] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.383 [2024-07-26 14:07:56.630892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.383 [2024-07-26 14:07:56.630898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.383 [2024-07-26 14:07:56.634726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.383 [2024-07-26 14:07:56.642891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.383 [2024-07-26 14:07:56.643557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.383 [2024-07-26 14:07:56.643600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.383 [2024-07-26 14:07:56.643623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.383 [2024-07-26 14:07:56.644215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.383 [2024-07-26 14:07:56.644736] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.383 [2024-07-26 14:07:56.644746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.383 [2024-07-26 14:07:56.644752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.383 [2024-07-26 14:07:56.647511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.383 [2024-07-26 14:07:56.655916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.383 [2024-07-26 14:07:56.656523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.383 [2024-07-26 14:07:56.656566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.383 [2024-07-26 14:07:56.656588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.383 [2024-07-26 14:07:56.656987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.383 [2024-07-26 14:07:56.657156] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.383 [2024-07-26 14:07:56.657166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.383 [2024-07-26 14:07:56.657172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.383 [2024-07-26 14:07:56.659819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.383 [2024-07-26 14:07:56.668790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.383 [2024-07-26 14:07:56.669528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.383 [2024-07-26 14:07:56.669573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.383 [2024-07-26 14:07:56.669595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.383 [2024-07-26 14:07:56.670023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.383 [2024-07-26 14:07:56.670190] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.383 [2024-07-26 14:07:56.670201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.383 [2024-07-26 14:07:56.670207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.384 [2024-07-26 14:07:56.672855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.384 [2024-07-26 14:07:56.681804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.384 [2024-07-26 14:07:56.682470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.384 [2024-07-26 14:07:56.682513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.384 [2024-07-26 14:07:56.682535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.384 [2024-07-26 14:07:56.682985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.384 [2024-07-26 14:07:56.683153] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.384 [2024-07-26 14:07:56.683162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.384 [2024-07-26 14:07:56.683168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.384 [2024-07-26 14:07:56.685855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.384 [2024-07-26 14:07:56.694845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.384 [2024-07-26 14:07:56.695584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.384 [2024-07-26 14:07:56.695629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.384 [2024-07-26 14:07:56.695658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.384 [2024-07-26 14:07:56.696248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.384 [2024-07-26 14:07:56.696832] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.384 [2024-07-26 14:07:56.696863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.384 [2024-07-26 14:07:56.696869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.384 [2024-07-26 14:07:56.699556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.384 [2024-07-26 14:07:56.707892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.384 [2024-07-26 14:07:56.708538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.384 [2024-07-26 14:07:56.708555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.384 [2024-07-26 14:07:56.708563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.384 [2024-07-26 14:07:56.708753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.384 [2024-07-26 14:07:56.708932] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.384 [2024-07-26 14:07:56.708942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.384 [2024-07-26 14:07:56.708949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.384 [2024-07-26 14:07:56.711795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.384 [2024-07-26 14:07:56.720964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.384 [2024-07-26 14:07:56.721687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.384 [2024-07-26 14:07:56.721704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.384 [2024-07-26 14:07:56.721712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.384 [2024-07-26 14:07:56.721889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.384 [2024-07-26 14:07:56.722073] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.384 [2024-07-26 14:07:56.722083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.384 [2024-07-26 14:07:56.722090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.384 [2024-07-26 14:07:56.724912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.384 [2024-07-26 14:07:56.734253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.384 [2024-07-26 14:07:56.735001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.384 [2024-07-26 14:07:56.735017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.384 [2024-07-26 14:07:56.735025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.384 [2024-07-26 14:07:56.735211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.384 [2024-07-26 14:07:56.735395] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.384 [2024-07-26 14:07:56.735408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.384 [2024-07-26 14:07:56.735415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.384 [2024-07-26 14:07:56.738398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.384 [2024-07-26 14:07:56.747569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.384 [2024-07-26 14:07:56.748357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.384 [2024-07-26 14:07:56.748374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.384 [2024-07-26 14:07:56.748382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.384 [2024-07-26 14:07:56.748564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.384 [2024-07-26 14:07:56.748747] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.384 [2024-07-26 14:07:56.748757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.384 [2024-07-26 14:07:56.748764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.384 [2024-07-26 14:07:56.751828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.384 [2024-07-26 14:07:56.760937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.384 [2024-07-26 14:07:56.761666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.384 [2024-07-26 14:07:56.761699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.384 [2024-07-26 14:07:56.761707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.384 [2024-07-26 14:07:56.761901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.384 [2024-07-26 14:07:56.762101] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.384 [2024-07-26 14:07:56.762111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.384 [2024-07-26 14:07:56.762118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.384 [2024-07-26 14:07:56.765174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.384 [2024-07-26 14:07:56.774219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.384 [2024-07-26 14:07:56.774972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.384 [2024-07-26 14:07:56.774991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.384 [2024-07-26 14:07:56.775000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.384 [2024-07-26 14:07:56.775192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.384 [2024-07-26 14:07:56.775376] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.384 [2024-07-26 14:07:56.775385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.384 [2024-07-26 14:07:56.775392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.384 [2024-07-26 14:07:56.778309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.384 [2024-07-26 14:07:56.787433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.384 [2024-07-26 14:07:56.788184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.384 [2024-07-26 14:07:56.788202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.384 [2024-07-26 14:07:56.788209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.384 [2024-07-26 14:07:56.788392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.384 [2024-07-26 14:07:56.788575] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.384 [2024-07-26 14:07:56.788586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.384 [2024-07-26 14:07:56.788592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.384 [2024-07-26 14:07:56.791626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.384 [2024-07-26 14:07:56.800585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.384 [2024-07-26 14:07:56.801342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.384 [2024-07-26 14:07:56.801385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.384 [2024-07-26 14:07:56.801408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.384 [2024-07-26 14:07:56.801986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.384 [2024-07-26 14:07:56.802344] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.384 [2024-07-26 14:07:56.802355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.384 [2024-07-26 14:07:56.802363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.384 [2024-07-26 14:07:56.805228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.385 [2024-07-26 14:07:56.813747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.385 [2024-07-26 14:07:56.814449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.385 [2024-07-26 14:07:56.814466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.385 [2024-07-26 14:07:56.814473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.385 [2024-07-26 14:07:56.814650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.385 [2024-07-26 14:07:56.814830] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.385 [2024-07-26 14:07:56.814840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.385 [2024-07-26 14:07:56.814847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.686 [2024-07-26 14:07:56.817677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.686 [2024-07-26 14:07:56.826842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.686 [2024-07-26 14:07:56.827570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.686 [2024-07-26 14:07:56.827587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.686 [2024-07-26 14:07:56.827595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.686 [2024-07-26 14:07:56.827775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.686 [2024-07-26 14:07:56.827952] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.686 [2024-07-26 14:07:56.827961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.686 [2024-07-26 14:07:56.827967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.686 [2024-07-26 14:07:56.830808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.686 [2024-07-26 14:07:56.839996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.686 [2024-07-26 14:07:56.840644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.686 [2024-07-26 14:07:56.840661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.686 [2024-07-26 14:07:56.840669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.686 [2024-07-26 14:07:56.840845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.686 [2024-07-26 14:07:56.841023] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.686 [2024-07-26 14:07:56.841033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.686 [2024-07-26 14:07:56.841039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.686 [2024-07-26 14:07:56.843865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.686 [2024-07-26 14:07:56.853031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.686 [2024-07-26 14:07:56.853770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.686 [2024-07-26 14:07:56.853803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.686 [2024-07-26 14:07:56.853811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.686 [2024-07-26 14:07:56.853992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.686 [2024-07-26 14:07:56.854181] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.686 [2024-07-26 14:07:56.854191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.686 [2024-07-26 14:07:56.854198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.686 [2024-07-26 14:07:56.857124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.686 [2024-07-26 14:07:56.866384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.686 [2024-07-26 14:07:56.867179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.686 [2024-07-26 14:07:56.867197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.686 [2024-07-26 14:07:56.867205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.686 [2024-07-26 14:07:56.867400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.686 [2024-07-26 14:07:56.867595] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.686 [2024-07-26 14:07:56.867605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.686 [2024-07-26 14:07:56.867617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.686 [2024-07-26 14:07:56.870726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.686 [2024-07-26 14:07:56.879716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.686 [2024-07-26 14:07:56.880406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.686 [2024-07-26 14:07:56.880424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.686 [2024-07-26 14:07:56.880433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.686 [2024-07-26 14:07:56.880627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.686 [2024-07-26 14:07:56.880823] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.686 [2024-07-26 14:07:56.880834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.686 [2024-07-26 14:07:56.880841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.686 [2024-07-26 14:07:56.883951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.686 [2024-07-26 14:07:56.893142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.686 [2024-07-26 14:07:56.893876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.686 [2024-07-26 14:07:56.893894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.687 [2024-07-26 14:07:56.893902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.687 [2024-07-26 14:07:56.894105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.687 [2024-07-26 14:07:56.894302] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.687 [2024-07-26 14:07:56.894312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.687 [2024-07-26 14:07:56.894320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.687 [2024-07-26 14:07:56.897430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.687 [2024-07-26 14:07:56.906279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.687 [2024-07-26 14:07:56.906955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.687 [2024-07-26 14:07:56.906997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.687 [2024-07-26 14:07:56.907018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.687 [2024-07-26 14:07:56.907426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.687 [2024-07-26 14:07:56.907605] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.687 [2024-07-26 14:07:56.907615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.687 [2024-07-26 14:07:56.907621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.687 [2024-07-26 14:07:56.910459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.687 [2024-07-26 14:07:56.919322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.687 [2024-07-26 14:07:56.920069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.687 [2024-07-26 14:07:56.920111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.687 [2024-07-26 14:07:56.920133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.687 [2024-07-26 14:07:56.920711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.687 [2024-07-26 14:07:56.921306] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.687 [2024-07-26 14:07:56.921333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.687 [2024-07-26 14:07:56.921353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.687 [2024-07-26 14:07:56.924216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.687 [2024-07-26 14:07:56.932158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.687 [2024-07-26 14:07:56.932817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.687 [2024-07-26 14:07:56.932834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.687 [2024-07-26 14:07:56.932842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.687 [2024-07-26 14:07:56.933014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.687 [2024-07-26 14:07:56.933195] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.687 [2024-07-26 14:07:56.933205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.687 [2024-07-26 14:07:56.933211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.687 [2024-07-26 14:07:56.935869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.687 [2024-07-26 14:07:56.945111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.687 [2024-07-26 14:07:56.945781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.687 [2024-07-26 14:07:56.945824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.687 [2024-07-26 14:07:56.945846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.687 [2024-07-26 14:07:56.946264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.687 [2024-07-26 14:07:56.946439] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.687 [2024-07-26 14:07:56.946449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.687 [2024-07-26 14:07:56.946456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.687 [2024-07-26 14:07:56.949137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.687 [2024-07-26 14:07:56.957947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.687 [2024-07-26 14:07:56.958667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.687 [2024-07-26 14:07:56.958685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.687 [2024-07-26 14:07:56.958692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.687 [2024-07-26 14:07:56.958867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.687 [2024-07-26 14:07:56.959041] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.687 [2024-07-26 14:07:56.959056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.687 [2024-07-26 14:07:56.959064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.687 [2024-07-26 14:07:56.961896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.687 [2024-07-26 14:07:56.970937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.687 [2024-07-26 14:07:56.971681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.687 [2024-07-26 14:07:56.971724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.687 [2024-07-26 14:07:56.971746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.687 [2024-07-26 14:07:56.972336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.687 [2024-07-26 14:07:56.972617] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.687 [2024-07-26 14:07:56.972626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.687 [2024-07-26 14:07:56.972632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.687 [2024-07-26 14:07:56.975374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.687 [2024-07-26 14:07:56.983764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.687 [2024-07-26 14:07:56.984499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.687 [2024-07-26 14:07:56.984543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.687 [2024-07-26 14:07:56.984565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.687 [2024-07-26 14:07:56.984859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.687 [2024-07-26 14:07:56.985120] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.687 [2024-07-26 14:07:56.985133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.687 [2024-07-26 14:07:56.985142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.687 [2024-07-26 14:07:56.989193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.687 [2024-07-26 14:07:56.997168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.687 [2024-07-26 14:07:56.997906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.687 [2024-07-26 14:07:56.997949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.687 [2024-07-26 14:07:56.997973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.687 [2024-07-26 14:07:56.998568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.687 [2024-07-26 14:07:56.998831] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.687 [2024-07-26 14:07:56.998840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.687 [2024-07-26 14:07:56.998847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.687 [2024-07-26 14:07:57.001623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.687 [2024-07-26 14:07:57.010252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.687 [2024-07-26 14:07:57.010994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.687 [2024-07-26 14:07:57.011037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.687 [2024-07-26 14:07:57.011078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.687 [2024-07-26 14:07:57.011659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.688 [2024-07-26 14:07:57.011886] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.688 [2024-07-26 14:07:57.011896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.688 [2024-07-26 14:07:57.011902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.688 [2024-07-26 14:07:57.014527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.688 [2024-07-26 14:07:57.023144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.688 [2024-07-26 14:07:57.023879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.688 [2024-07-26 14:07:57.023922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.688 [2024-07-26 14:07:57.023944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.688 [2024-07-26 14:07:57.024537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.688 [2024-07-26 14:07:57.025048] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.688 [2024-07-26 14:07:57.025058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.688 [2024-07-26 14:07:57.025065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.688 [2024-07-26 14:07:57.027676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.688 [2024-07-26 14:07:57.036028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.688 [2024-07-26 14:07:57.036760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.688 [2024-07-26 14:07:57.036803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.688 [2024-07-26 14:07:57.036827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.688 [2024-07-26 14:07:57.037416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.688 [2024-07-26 14:07:57.037590] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.688 [2024-07-26 14:07:57.037600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.688 [2024-07-26 14:07:57.037606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.688 [2024-07-26 14:07:57.040246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.688 [2024-07-26 14:07:57.048902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.688 [2024-07-26 14:07:57.049630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.688 [2024-07-26 14:07:57.049680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.688 [2024-07-26 14:07:57.049702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.688 [2024-07-26 14:07:57.049958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.688 [2024-07-26 14:07:57.050145] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.688 [2024-07-26 14:07:57.050155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.688 [2024-07-26 14:07:57.050162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.688 [2024-07-26 14:07:57.052823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.688 [2024-07-26 14:07:57.061784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.688 [2024-07-26 14:07:57.062511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.688 [2024-07-26 14:07:57.062554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.688 [2024-07-26 14:07:57.062576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.688 [2024-07-26 14:07:57.062958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.688 [2024-07-26 14:07:57.063146] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.688 [2024-07-26 14:07:57.063156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.688 [2024-07-26 14:07:57.063162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.688 [2024-07-26 14:07:57.065824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.688 [2024-07-26 14:07:57.074630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.688 [2024-07-26 14:07:57.075291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.688 [2024-07-26 14:07:57.075332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.688 [2024-07-26 14:07:57.075354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.688 [2024-07-26 14:07:57.075733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.688 [2024-07-26 14:07:57.075896] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.688 [2024-07-26 14:07:57.075905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.688 [2024-07-26 14:07:57.075911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.688 [2024-07-26 14:07:57.078707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.688 [2024-07-26 14:07:57.087724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.688 [2024-07-26 14:07:57.088437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.688 [2024-07-26 14:07:57.088454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.688 [2024-07-26 14:07:57.088461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.688 [2024-07-26 14:07:57.088637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.688 [2024-07-26 14:07:57.088818] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.688 [2024-07-26 14:07:57.088829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.688 [2024-07-26 14:07:57.088835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.688 [2024-07-26 14:07:57.091660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.688 [2024-07-26 14:07:57.100673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.688 [2024-07-26 14:07:57.101436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.688 [2024-07-26 14:07:57.101479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.688 [2024-07-26 14:07:57.101502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.688 [2024-07-26 14:07:57.101772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.688 [2024-07-26 14:07:57.101945] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.688 [2024-07-26 14:07:57.101954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.688 [2024-07-26 14:07:57.101961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.688 [2024-07-26 14:07:57.104581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.688 [2024-07-26 14:07:57.113546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.688 [2024-07-26 14:07:57.114220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.688 [2024-07-26 14:07:57.114236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.689 [2024-07-26 14:07:57.114243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.689 [2024-07-26 14:07:57.114406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.689 [2024-07-26 14:07:57.114569] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.689 [2024-07-26 14:07:57.114578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.689 [2024-07-26 14:07:57.114584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.689 [2024-07-26 14:07:57.117319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.950 [2024-07-26 14:07:57.126607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.950 [2024-07-26 14:07:57.127280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.950 [2024-07-26 14:07:57.127322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.950 [2024-07-26 14:07:57.127345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.950 [2024-07-26 14:07:57.127859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.950 [2024-07-26 14:07:57.128024] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.950 [2024-07-26 14:07:57.128034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.950 [2024-07-26 14:07:57.128040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.950 [2024-07-26 14:07:57.130738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.950 [2024-07-26 14:07:57.139454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.950 [2024-07-26 14:07:57.140182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.950 [2024-07-26 14:07:57.140225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.950 [2024-07-26 14:07:57.140248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.950 [2024-07-26 14:07:57.140825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.950 [2024-07-26 14:07:57.141223] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.950 [2024-07-26 14:07:57.141234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.950 [2024-07-26 14:07:57.141240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.950 [2024-07-26 14:07:57.143913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.950 [2024-07-26 14:07:57.152362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.950 [2024-07-26 14:07:57.153097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.950 [2024-07-26 14:07:57.153140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.950 [2024-07-26 14:07:57.153163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.950 [2024-07-26 14:07:57.153740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.950 [2024-07-26 14:07:57.154108] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.950 [2024-07-26 14:07:57.154118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.950 [2024-07-26 14:07:57.154124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.950 [2024-07-26 14:07:57.156789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.950 [2024-07-26 14:07:57.165291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.950 [2024-07-26 14:07:57.166015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.950 [2024-07-26 14:07:57.166067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.950 [2024-07-26 14:07:57.166091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.950 [2024-07-26 14:07:57.166315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.950 [2024-07-26 14:07:57.166480] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.950 [2024-07-26 14:07:57.166489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.950 [2024-07-26 14:07:57.166495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.950 [2024-07-26 14:07:57.169090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.950 [2024-07-26 14:07:57.178105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.950 [2024-07-26 14:07:57.178769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.950 [2024-07-26 14:07:57.178812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.950 [2024-07-26 14:07:57.178841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.950 [2024-07-26 14:07:57.179125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.950 [2024-07-26 14:07:57.179299] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.950 [2024-07-26 14:07:57.179308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.950 [2024-07-26 14:07:57.179314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.950 [2024-07-26 14:07:57.181969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.950 [2024-07-26 14:07:57.190932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.950 [2024-07-26 14:07:57.191665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.950 [2024-07-26 14:07:57.191708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.950 [2024-07-26 14:07:57.191730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.950 [2024-07-26 14:07:57.192067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.950 [2024-07-26 14:07:57.192258] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.950 [2024-07-26 14:07:57.192268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.950 [2024-07-26 14:07:57.192274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.950 [2024-07-26 14:07:57.194929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.950 [2024-07-26 14:07:57.203741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.950 [2024-07-26 14:07:57.204362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.950 [2024-07-26 14:07:57.204378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.950 [2024-07-26 14:07:57.204386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.950 [2024-07-26 14:07:57.204548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.950 [2024-07-26 14:07:57.204711] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.950 [2024-07-26 14:07:57.204720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.950 [2024-07-26 14:07:57.204726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.950 [2024-07-26 14:07:57.207413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.950 [2024-07-26 14:07:57.216627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.950 [2024-07-26 14:07:57.217325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.950 [2024-07-26 14:07:57.217343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.950 [2024-07-26 14:07:57.217350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.950 [2024-07-26 14:07:57.217522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.950 [2024-07-26 14:07:57.217695] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.950 [2024-07-26 14:07:57.217704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.950 [2024-07-26 14:07:57.217715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.950 [2024-07-26 14:07:57.220552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.950 [2024-07-26 14:07:57.229649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.950 [2024-07-26 14:07:57.230399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.950 [2024-07-26 14:07:57.230442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.950 [2024-07-26 14:07:57.230465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.950 [2024-07-26 14:07:57.230901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.950 [2024-07-26 14:07:57.231079] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.950 [2024-07-26 14:07:57.231089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.950 [2024-07-26 14:07:57.231095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.950 [2024-07-26 14:07:57.233835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.950 [2024-07-26 14:07:57.242722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.950 [2024-07-26 14:07:57.243405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.950 [2024-07-26 14:07:57.243447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.950 [2024-07-26 14:07:57.243469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.950 [2024-07-26 14:07:57.244243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.950 [2024-07-26 14:07:57.244443] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.950 [2024-07-26 14:07:57.244452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.950 [2024-07-26 14:07:57.244459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.950 [2024-07-26 14:07:57.247128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.950 [2024-07-26 14:07:57.255631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.950 [2024-07-26 14:07:57.256372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.950 [2024-07-26 14:07:57.256414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.950 [2024-07-26 14:07:57.256437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.950 [2024-07-26 14:07:57.257014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.950 [2024-07-26 14:07:57.257340] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.950 [2024-07-26 14:07:57.257350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.950 [2024-07-26 14:07:57.257356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.950 [2024-07-26 14:07:57.260062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.950 [2024-07-26 14:07:57.268559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.950 [2024-07-26 14:07:57.269292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.950 [2024-07-26 14:07:57.269334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.950 [2024-07-26 14:07:57.269365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.950 [2024-07-26 14:07:57.269527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.950 [2024-07-26 14:07:57.269691] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.951 [2024-07-26 14:07:57.269701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.951 [2024-07-26 14:07:57.269707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.951 [2024-07-26 14:07:57.272396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.951 [2024-07-26 14:07:57.281455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.951 [2024-07-26 14:07:57.282182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.951 [2024-07-26 14:07:57.282225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.951 [2024-07-26 14:07:57.282247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.951 [2024-07-26 14:07:57.282492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.951 [2024-07-26 14:07:57.282656] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.951 [2024-07-26 14:07:57.282665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.951 [2024-07-26 14:07:57.282671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.951 [2024-07-26 14:07:57.285358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.951 [2024-07-26 14:07:57.294378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.951 [2024-07-26 14:07:57.295105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.951 [2024-07-26 14:07:57.295148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.951 [2024-07-26 14:07:57.295170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.951 [2024-07-26 14:07:57.295748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.951 [2024-07-26 14:07:57.296093] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.951 [2024-07-26 14:07:57.296119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.951 [2024-07-26 14:07:57.296126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.951 [2024-07-26 14:07:57.298791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.951 [2024-07-26 14:07:57.307298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.951 [2024-07-26 14:07:57.308027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.951 [2024-07-26 14:07:57.308081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.951 [2024-07-26 14:07:57.308104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.951 [2024-07-26 14:07:57.308441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.951 [2024-07-26 14:07:57.308605] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.951 [2024-07-26 14:07:57.308614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.951 [2024-07-26 14:07:57.308620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.951 [2024-07-26 14:07:57.311314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.951 [2024-07-26 14:07:57.320350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.951 [2024-07-26 14:07:57.321075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.951 [2024-07-26 14:07:57.321118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.951 [2024-07-26 14:07:57.321141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.951 [2024-07-26 14:07:57.321578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.951 [2024-07-26 14:07:57.321742] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.951 [2024-07-26 14:07:57.321751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.951 [2024-07-26 14:07:57.321757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.951 [2024-07-26 14:07:57.324447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.951 [2024-07-26 14:07:57.333270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.951 [2024-07-26 14:07:57.334009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.951 [2024-07-26 14:07:57.334062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.951 [2024-07-26 14:07:57.334086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.951 [2024-07-26 14:07:57.334442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.951 [2024-07-26 14:07:57.334615] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.951 [2024-07-26 14:07:57.334624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.951 [2024-07-26 14:07:57.334631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.951 [2024-07-26 14:07:57.337326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.951 [2024-07-26 14:07:57.346079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.951 [2024-07-26 14:07:57.346809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.951 [2024-07-26 14:07:57.346851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.951 [2024-07-26 14:07:57.346873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.951 [2024-07-26 14:07:57.347466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.951 [2024-07-26 14:07:57.347722] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.951 [2024-07-26 14:07:57.347730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.951 [2024-07-26 14:07:57.347736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.951 [2024-07-26 14:07:57.350327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.951 [2024-07-26 14:07:57.358890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.951 [2024-07-26 14:07:57.359615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.951 [2024-07-26 14:07:57.359657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.951 [2024-07-26 14:07:57.359680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.951 [2024-07-26 14:07:57.360275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.951 [2024-07-26 14:07:57.360599] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.951 [2024-07-26 14:07:57.360609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.951 [2024-07-26 14:07:57.360616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.951 [2024-07-26 14:07:57.363257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.951 [2024-07-26 14:07:57.371764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.951 [2024-07-26 14:07:57.372497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.951 [2024-07-26 14:07:57.372513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:29.951 [2024-07-26 14:07:57.372520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:29.951 [2024-07-26 14:07:57.372683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:29.951 [2024-07-26 14:07:57.372846] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.951 [2024-07-26 14:07:57.372855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.951 [2024-07-26 14:07:57.372861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.951 [2024-07-26 14:07:57.375548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.212 [2024-07-26 14:07:57.384809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.212 [2024-07-26 14:07:57.385575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.212 [2024-07-26 14:07:57.385619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.212 [2024-07-26 14:07:57.385642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.212 [2024-07-26 14:07:57.386238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.212 [2024-07-26 14:07:57.386598] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.212 [2024-07-26 14:07:57.386607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.212 [2024-07-26 14:07:57.386614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.212 [2024-07-26 14:07:57.389364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.212 [2024-07-26 14:07:57.397917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.212 [2024-07-26 14:07:57.398651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.212 [2024-07-26 14:07:57.398704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.212 [2024-07-26 14:07:57.398728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.212 [2024-07-26 14:07:57.399324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.212 [2024-07-26 14:07:57.399798] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.212 [2024-07-26 14:07:57.399811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.212 [2024-07-26 14:07:57.399820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.212 [2024-07-26 14:07:57.403870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.212 [2024-07-26 14:07:57.411481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.212 [2024-07-26 14:07:57.412202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.212 [2024-07-26 14:07:57.412247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.212 [2024-07-26 14:07:57.412270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.213 [2024-07-26 14:07:57.412482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.213 [2024-07-26 14:07:57.412651] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.213 [2024-07-26 14:07:57.412660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.213 [2024-07-26 14:07:57.412667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.213 [2024-07-26 14:07:57.415394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.213 [2024-07-26 14:07:57.424384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.213 [2024-07-26 14:07:57.425096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.213 [2024-07-26 14:07:57.425140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.213 [2024-07-26 14:07:57.425163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.213 [2024-07-26 14:07:57.425754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.213 [2024-07-26 14:07:57.425919] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.213 [2024-07-26 14:07:57.425928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.213 [2024-07-26 14:07:57.425934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.213 [2024-07-26 14:07:57.428626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.213 [2024-07-26 14:07:57.437284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.213 [2024-07-26 14:07:57.438020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.213 [2024-07-26 14:07:57.438075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.213 [2024-07-26 14:07:57.438099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.213 [2024-07-26 14:07:57.438453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.213 [2024-07-26 14:07:57.438623] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.213 [2024-07-26 14:07:57.438632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.213 [2024-07-26 14:07:57.438638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.213 [2024-07-26 14:07:57.441327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.213 [2024-07-26 14:07:57.450239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.213 [2024-07-26 14:07:57.450924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.213 [2024-07-26 14:07:57.450967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.213 [2024-07-26 14:07:57.450990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.213 [2024-07-26 14:07:57.451586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.213 [2024-07-26 14:07:57.451943] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.213 [2024-07-26 14:07:57.451952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.213 [2024-07-26 14:07:57.451958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.213 [2024-07-26 14:07:57.454578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.213 [2024-07-26 14:07:57.463020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.213 [2024-07-26 14:07:57.463757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.213 [2024-07-26 14:07:57.463800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.213 [2024-07-26 14:07:57.463823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.213 [2024-07-26 14:07:57.464415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.213 [2024-07-26 14:07:57.464804] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.213 [2024-07-26 14:07:57.464814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.213 [2024-07-26 14:07:57.464820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.213 [2024-07-26 14:07:57.467449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.213 [2024-07-26 14:07:57.476143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.213 [2024-07-26 14:07:57.476901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.213 [2024-07-26 14:07:57.476943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.213 [2024-07-26 14:07:57.476964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.213 [2024-07-26 14:07:57.477523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.213 [2024-07-26 14:07:57.477698] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.213 [2024-07-26 14:07:57.477707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.213 [2024-07-26 14:07:57.477713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.213 [2024-07-26 14:07:57.480458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.213 [2024-07-26 14:07:57.489213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.213 [2024-07-26 14:07:57.489959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.213 [2024-07-26 14:07:57.490001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.213 [2024-07-26 14:07:57.490023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.213 [2024-07-26 14:07:57.490613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.213 [2024-07-26 14:07:57.490924] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.213 [2024-07-26 14:07:57.490936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.213 [2024-07-26 14:07:57.490946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.213 [2024-07-26 14:07:57.495005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.213 [2024-07-26 14:07:57.502702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.213 [2024-07-26 14:07:57.503438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.213 [2024-07-26 14:07:57.503481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.213 [2024-07-26 14:07:57.503503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.213 [2024-07-26 14:07:57.503983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.213 [2024-07-26 14:07:57.504177] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.213 [2024-07-26 14:07:57.504187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.213 [2024-07-26 14:07:57.504193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.213 [2024-07-26 14:07:57.506899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.213 [2024-07-26 14:07:57.515500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.213 [2024-07-26 14:07:57.516234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.213 [2024-07-26 14:07:57.516276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.213 [2024-07-26 14:07:57.516298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.213 [2024-07-26 14:07:57.516589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.213 [2024-07-26 14:07:57.516754] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.213 [2024-07-26 14:07:57.516763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.213 [2024-07-26 14:07:57.516768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.213 [2024-07-26 14:07:57.519459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.213 [2024-07-26 14:07:57.528404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.213 [2024-07-26 14:07:57.529128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.213 [2024-07-26 14:07:57.529171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.213 [2024-07-26 14:07:57.529200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.213 [2024-07-26 14:07:57.529779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.213 [2024-07-26 14:07:57.530290] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.213 [2024-07-26 14:07:57.530301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.213 [2024-07-26 14:07:57.530307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.213 [2024-07-26 14:07:57.534148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.213 [2024-07-26 14:07:57.542131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.213 [2024-07-26 14:07:57.542873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.213 [2024-07-26 14:07:57.542915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.213 [2024-07-26 14:07:57.542936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.213 [2024-07-26 14:07:57.543160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.214 [2024-07-26 14:07:57.543333] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.214 [2024-07-26 14:07:57.543343] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.214 [2024-07-26 14:07:57.543349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.214 [2024-07-26 14:07:57.546051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.214 [2024-07-26 14:07:57.554979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.214 [2024-07-26 14:07:57.555711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.214 [2024-07-26 14:07:57.555754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.214 [2024-07-26 14:07:57.555776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.214 [2024-07-26 14:07:57.556196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.214 [2024-07-26 14:07:57.556370] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.214 [2024-07-26 14:07:57.556379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.214 [2024-07-26 14:07:57.556385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.214 [2024-07-26 14:07:57.559033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.214 [2024-07-26 14:07:57.567842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.214 [2024-07-26 14:07:57.568581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.214 [2024-07-26 14:07:57.568625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.214 [2024-07-26 14:07:57.568648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.214 [2024-07-26 14:07:57.569153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.214 [2024-07-26 14:07:57.569327] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.214 [2024-07-26 14:07:57.569337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.214 [2024-07-26 14:07:57.569347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.214 [2024-07-26 14:07:57.571998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.214 [2024-07-26 14:07:57.580656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.214 [2024-07-26 14:07:57.581386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.214 [2024-07-26 14:07:57.581429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.214 [2024-07-26 14:07:57.581451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.214 [2024-07-26 14:07:57.582029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.214 [2024-07-26 14:07:57.582455] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.214 [2024-07-26 14:07:57.582465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.214 [2024-07-26 14:07:57.582472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.214 [2024-07-26 14:07:57.585119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.214 [2024-07-26 14:07:57.593560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.214 [2024-07-26 14:07:57.594203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.214 [2024-07-26 14:07:57.594244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.214 [2024-07-26 14:07:57.594267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.214 [2024-07-26 14:07:57.594801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.214 [2024-07-26 14:07:57.594965] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.214 [2024-07-26 14:07:57.594974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.214 [2024-07-26 14:07:57.594980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.214 [2024-07-26 14:07:57.597665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.214 [2024-07-26 14:07:57.606777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.214 [2024-07-26 14:07:57.607530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.214 [2024-07-26 14:07:57.607576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.214 [2024-07-26 14:07:57.607600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.214 [2024-07-26 14:07:57.608141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.214 [2024-07-26 14:07:57.608316] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.214 [2024-07-26 14:07:57.608325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.214 [2024-07-26 14:07:57.608332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.214 [2024-07-26 14:07:57.611082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.214 [2024-07-26 14:07:57.619824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.214 [2024-07-26 14:07:57.620620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.214 [2024-07-26 14:07:57.620664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.214 [2024-07-26 14:07:57.620688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.214 [2024-07-26 14:07:57.620961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.214 [2024-07-26 14:07:57.621140] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.214 [2024-07-26 14:07:57.621149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.214 [2024-07-26 14:07:57.621156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.214 [2024-07-26 14:07:57.623815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.214 [2024-07-26 14:07:57.632789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.214 [2024-07-26 14:07:57.633500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.214 [2024-07-26 14:07:57.633516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.214 [2024-07-26 14:07:57.633524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.214 [2024-07-26 14:07:57.633687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.214 [2024-07-26 14:07:57.633850] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.214 [2024-07-26 14:07:57.633859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.214 [2024-07-26 14:07:57.633866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.214 [2024-07-26 14:07:57.636495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.214 [2024-07-26 14:07:57.645873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.214 [2024-07-26 14:07:57.646622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.476 [2024-07-26 14:07:57.646665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.476 [2024-07-26 14:07:57.646688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.476 [2024-07-26 14:07:57.647003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.476 [2024-07-26 14:07:57.647188] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.476 [2024-07-26 14:07:57.647198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.476 [2024-07-26 14:07:57.647205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.476 [2024-07-26 14:07:57.650025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.476 [2024-07-26 14:07:57.658829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.476 [2024-07-26 14:07:57.659299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.476 [2024-07-26 14:07:57.659342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.476 [2024-07-26 14:07:57.659364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.476 [2024-07-26 14:07:57.659923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.476 [2024-07-26 14:07:57.660100] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.476 [2024-07-26 14:07:57.660110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.476 [2024-07-26 14:07:57.660117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.476 [2024-07-26 14:07:57.662893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.476 [2024-07-26 14:07:57.671887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.476 [2024-07-26 14:07:57.672714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.476 [2024-07-26 14:07:57.672757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.476 [2024-07-26 14:07:57.672781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.476 [2024-07-26 14:07:57.673085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.476 [2024-07-26 14:07:57.673259] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.476 [2024-07-26 14:07:57.673269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.476 [2024-07-26 14:07:57.673276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.476 [2024-07-26 14:07:57.675936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.476 [2024-07-26 14:07:57.684909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.476 [2024-07-26 14:07:57.685567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.476 [2024-07-26 14:07:57.685611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.476 [2024-07-26 14:07:57.685633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.476 [2024-07-26 14:07:57.686010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.476 [2024-07-26 14:07:57.686202] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.476 [2024-07-26 14:07:57.686212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.476 [2024-07-26 14:07:57.686218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.476 [2024-07-26 14:07:57.688878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.476 [2024-07-26 14:07:57.697799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.476 [2024-07-26 14:07:57.698502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.476 [2024-07-26 14:07:57.698545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.476 [2024-07-26 14:07:57.698567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.476 [2024-07-26 14:07:57.699159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.476 [2024-07-26 14:07:57.699414] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.476 [2024-07-26 14:07:57.699423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.477 [2024-07-26 14:07:57.699433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.477 [2024-07-26 14:07:57.702078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.477 [2024-07-26 14:07:57.710582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.477 [2024-07-26 14:07:57.711310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.477 [2024-07-26 14:07:57.711354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.477 [2024-07-26 14:07:57.711376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.477 [2024-07-26 14:07:57.711845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.477 [2024-07-26 14:07:57.712015] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.477 [2024-07-26 14:07:57.712024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.477 [2024-07-26 14:07:57.712031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.477 [2024-07-26 14:07:57.714716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.477 [2024-07-26 14:07:57.723394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.477 [2024-07-26 14:07:57.724141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.477 [2024-07-26 14:07:57.724158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.477 [2024-07-26 14:07:57.724167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.477 [2024-07-26 14:07:57.724340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.477 [2024-07-26 14:07:57.724514] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.477 [2024-07-26 14:07:57.724523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.477 [2024-07-26 14:07:57.724529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.477 [2024-07-26 14:07:57.727369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.477 [2024-07-26 14:07:57.736500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.477 [2024-07-26 14:07:57.737247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.477 [2024-07-26 14:07:57.737290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.477 [2024-07-26 14:07:57.737313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.477 [2024-07-26 14:07:57.737540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.477 [2024-07-26 14:07:57.737714] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.477 [2024-07-26 14:07:57.737724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.477 [2024-07-26 14:07:57.737731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.477 [2024-07-26 14:07:57.740470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.477 [2024-07-26 14:07:57.749549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.477 [2024-07-26 14:07:57.750252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.477 [2024-07-26 14:07:57.750303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.477 [2024-07-26 14:07:57.750325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.477 [2024-07-26 14:07:57.750904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.477 [2024-07-26 14:07:57.751458] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.477 [2024-07-26 14:07:57.751468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.477 [2024-07-26 14:07:57.751474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.477 [2024-07-26 14:07:57.754121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.477 [2024-07-26 14:07:57.762499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.477 [2024-07-26 14:07:57.763235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.477 [2024-07-26 14:07:57.763278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.477 [2024-07-26 14:07:57.763301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.477 [2024-07-26 14:07:57.763650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.477 [2024-07-26 14:07:57.763815] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.477 [2024-07-26 14:07:57.763824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.477 [2024-07-26 14:07:57.763830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.477 [2024-07-26 14:07:57.766519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.477 [2024-07-26 14:07:57.775320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.477 [2024-07-26 14:07:57.775976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.477 [2024-07-26 14:07:57.776018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.477 [2024-07-26 14:07:57.776040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.477 [2024-07-26 14:07:57.776481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.477 [2024-07-26 14:07:57.776654] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.477 [2024-07-26 14:07:57.776663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.477 [2024-07-26 14:07:57.776670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.477 [2024-07-26 14:07:57.779308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.477 [2024-07-26 14:07:57.788308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.477 [2024-07-26 14:07:57.788991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.477 [2024-07-26 14:07:57.789008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.477 [2024-07-26 14:07:57.789015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.477 [2024-07-26 14:07:57.789183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.477 [2024-07-26 14:07:57.789350] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.477 [2024-07-26 14:07:57.789359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.477 [2024-07-26 14:07:57.789365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.477 [2024-07-26 14:07:57.792022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.477 [2024-07-26 14:07:57.801317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.477 [2024-07-26 14:07:57.801980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.477 [2024-07-26 14:07:57.802021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.477 [2024-07-26 14:07:57.802057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.477 [2024-07-26 14:07:57.802637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.477 [2024-07-26 14:07:57.802979] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.477 [2024-07-26 14:07:57.802988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.477 [2024-07-26 14:07:57.802994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.477 [2024-07-26 14:07:57.806726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.477 [2024-07-26 14:07:57.815191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.477 [2024-07-26 14:07:57.815855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.477 [2024-07-26 14:07:57.815899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.477 [2024-07-26 14:07:57.815921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.477 [2024-07-26 14:07:57.816343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.477 [2024-07-26 14:07:57.816517] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.477 [2024-07-26 14:07:57.816526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.477 [2024-07-26 14:07:57.816532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.477 [2024-07-26 14:07:57.819231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.477 [2024-07-26 14:07:57.828109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.477 [2024-07-26 14:07:57.828842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.477 [2024-07-26 14:07:57.828884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.477 [2024-07-26 14:07:57.828907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.477 [2024-07-26 14:07:57.829500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.477 [2024-07-26 14:07:57.829829] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.477 [2024-07-26 14:07:57.829839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.477 [2024-07-26 14:07:57.829845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.477 [2024-07-26 14:07:57.832471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.478 [2024-07-26 14:07:57.840970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.478 [2024-07-26 14:07:57.841686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.478 [2024-07-26 14:07:57.841730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.478 [2024-07-26 14:07:57.841753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.478 [2024-07-26 14:07:57.842346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.478 [2024-07-26 14:07:57.842742] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.478 [2024-07-26 14:07:57.842752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.478 [2024-07-26 14:07:57.842758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.478 [2024-07-26 14:07:57.845388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.478 [2024-07-26 14:07:57.853832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.478 [2024-07-26 14:07:57.854480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.478 [2024-07-26 14:07:57.854497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.478 [2024-07-26 14:07:57.854504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.478 [2024-07-26 14:07:57.854667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.478 [2024-07-26 14:07:57.854830] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.478 [2024-07-26 14:07:57.854839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.478 [2024-07-26 14:07:57.854845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.478 [2024-07-26 14:07:57.857591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.478 [2024-07-26 14:07:57.866691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.478 [2024-07-26 14:07:57.867434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.478 [2024-07-26 14:07:57.867478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.478 [2024-07-26 14:07:57.867500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.478 [2024-07-26 14:07:57.867758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.478 [2024-07-26 14:07:57.867932] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.478 [2024-07-26 14:07:57.867942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.478 [2024-07-26 14:07:57.867948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.478 [2024-07-26 14:07:57.870729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.478 [2024-07-26 14:07:57.879647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.478 [2024-07-26 14:07:57.880355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.478 [2024-07-26 14:07:57.880398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.478 [2024-07-26 14:07:57.880427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.478 [2024-07-26 14:07:57.880868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.478 [2024-07-26 14:07:57.881032] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.478 [2024-07-26 14:07:57.881041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.478 [2024-07-26 14:07:57.881054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.478 [2024-07-26 14:07:57.883737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.478 [2024-07-26 14:07:57.892539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.478 [2024-07-26 14:07:57.893272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.478 [2024-07-26 14:07:57.893316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.478 [2024-07-26 14:07:57.893339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.478 [2024-07-26 14:07:57.893636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.478 [2024-07-26 14:07:57.893800] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.478 [2024-07-26 14:07:57.893809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.478 [2024-07-26 14:07:57.893815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.478 [2024-07-26 14:07:57.896616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.478 [2024-07-26 14:07:57.905627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.478 [2024-07-26 14:07:57.906360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.478 [2024-07-26 14:07:57.906377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.478 [2024-07-26 14:07:57.906385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.478 [2024-07-26 14:07:57.906562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.478 [2024-07-26 14:07:57.906741] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.478 [2024-07-26 14:07:57.906750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.478 [2024-07-26 14:07:57.906757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.478 [2024-07-26 14:07:57.909584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.740 [2024-07-26 14:07:57.918802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.740 [2024-07-26 14:07:57.919564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.740 [2024-07-26 14:07:57.919582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.740 [2024-07-26 14:07:57.919590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.740 [2024-07-26 14:07:57.919768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.740 [2024-07-26 14:07:57.919947] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.740 [2024-07-26 14:07:57.919960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.740 [2024-07-26 14:07:57.919966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.740 [2024-07-26 14:07:57.922794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.740 [2024-07-26 14:07:57.931984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.740 [2024-07-26 14:07:57.932643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.740 [2024-07-26 14:07:57.932661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.740 [2024-07-26 14:07:57.932668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.740 [2024-07-26 14:07:57.932845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.740 [2024-07-26 14:07:57.933024] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.740 [2024-07-26 14:07:57.933033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.740 [2024-07-26 14:07:57.933039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.740 [2024-07-26 14:07:57.935867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.740 [2024-07-26 14:07:57.945050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.740 [2024-07-26 14:07:57.945806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.740 [2024-07-26 14:07:57.945824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.740 [2024-07-26 14:07:57.945831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.740 [2024-07-26 14:07:57.946008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.740 [2024-07-26 14:07:57.946192] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.740 [2024-07-26 14:07:57.946202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.740 [2024-07-26 14:07:57.946208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.740 [2024-07-26 14:07:57.949026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.740 [2024-07-26 14:07:57.958242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.740 [2024-07-26 14:07:57.958975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.740 [2024-07-26 14:07:57.958992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.740 [2024-07-26 14:07:57.958999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.740 [2024-07-26 14:07:57.959182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.740 [2024-07-26 14:07:57.959360] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.740 [2024-07-26 14:07:57.959369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.740 [2024-07-26 14:07:57.959376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.740 [2024-07-26 14:07:57.962203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.740 [2024-07-26 14:07:57.971377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.740 [2024-07-26 14:07:57.972094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.740 [2024-07-26 14:07:57.972112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.740 [2024-07-26 14:07:57.972119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.740 [2024-07-26 14:07:57.972297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.740 [2024-07-26 14:07:57.972476] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.740 [2024-07-26 14:07:57.972485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.740 [2024-07-26 14:07:57.972492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.740 [2024-07-26 14:07:57.975322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.740 [2024-07-26 14:07:57.984494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.740 [2024-07-26 14:07:57.985192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.740 [2024-07-26 14:07:57.985235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.740 [2024-07-26 14:07:57.985257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.740 [2024-07-26 14:07:57.985639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.740 [2024-07-26 14:07:57.985895] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.740 [2024-07-26 14:07:57.985907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.740 [2024-07-26 14:07:57.985916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.740 [2024-07-26 14:07:57.989978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.740 [2024-07-26 14:07:57.997866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.740 [2024-07-26 14:07:57.998531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.740 [2024-07-26 14:07:57.998575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.740 [2024-07-26 14:07:57.998598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.740 [2024-07-26 14:07:57.999061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.740 [2024-07-26 14:07:57.999240] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.740 [2024-07-26 14:07:57.999250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.740 [2024-07-26 14:07:57.999257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.740 [2024-07-26 14:07:58.002091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.740 [2024-07-26 14:07:58.010928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.740 [2024-07-26 14:07:58.011605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.740 [2024-07-26 14:07:58.011646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.740 [2024-07-26 14:07:58.011669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.741 [2024-07-26 14:07:58.012054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.741 [2024-07-26 14:07:58.012228] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.741 [2024-07-26 14:07:58.012238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.741 [2024-07-26 14:07:58.012245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.741 [2024-07-26 14:07:58.014994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.741 [2024-07-26 14:07:58.023981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.741 [2024-07-26 14:07:58.024591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.741 [2024-07-26 14:07:58.024635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.741 [2024-07-26 14:07:58.024658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.741 [2024-07-26 14:07:58.025076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.741 [2024-07-26 14:07:58.025251] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.741 [2024-07-26 14:07:58.025260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.741 [2024-07-26 14:07:58.025266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.741 [2024-07-26 14:07:58.028011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.741 [2024-07-26 14:07:58.036861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.741 [2024-07-26 14:07:58.037555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.741 [2024-07-26 14:07:58.037598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.741 [2024-07-26 14:07:58.037620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.741 [2024-07-26 14:07:58.038144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.741 [2024-07-26 14:07:58.038309] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.741 [2024-07-26 14:07:58.038318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.741 [2024-07-26 14:07:58.038324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.741 [2024-07-26 14:07:58.040979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.741 [2024-07-26 14:07:58.049784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.741 [2024-07-26 14:07:58.050480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.741 [2024-07-26 14:07:58.050524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.741 [2024-07-26 14:07:58.050545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.741 [2024-07-26 14:07:58.050910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.741 [2024-07-26 14:07:58.051080] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.741 [2024-07-26 14:07:58.051090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.741 [2024-07-26 14:07:58.051100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.741 [2024-07-26 14:07:58.053788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.741 [2024-07-26 14:07:58.062713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.741 [2024-07-26 14:07:58.063424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.741 [2024-07-26 14:07:58.063468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.741 [2024-07-26 14:07:58.063490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.741 [2024-07-26 14:07:58.063934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.741 [2024-07-26 14:07:58.064122] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.741 [2024-07-26 14:07:58.064132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.741 [2024-07-26 14:07:58.064139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.741 [2024-07-26 14:07:58.066806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.741 [2024-07-26 14:07:58.075577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.741 [2024-07-26 14:07:58.076540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.741 [2024-07-26 14:07:58.076584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.741 [2024-07-26 14:07:58.076606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.741 [2024-07-26 14:07:58.076996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.741 [2024-07-26 14:07:58.077176] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.741 [2024-07-26 14:07:58.077185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.741 [2024-07-26 14:07:58.077192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.741 [2024-07-26 14:07:58.079800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.741 [2024-07-26 14:07:58.088460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.741 [2024-07-26 14:07:58.089174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.741 [2024-07-26 14:07:58.089218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.741 [2024-07-26 14:07:58.089241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.741 [2024-07-26 14:07:58.089558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.741 [2024-07-26 14:07:58.089723] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.741 [2024-07-26 14:07:58.089733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.741 [2024-07-26 14:07:58.089738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.741 [2024-07-26 14:07:58.092367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.741 [2024-07-26 14:07:58.101356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.741 [2024-07-26 14:07:58.101957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.741 [2024-07-26 14:07:58.102013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.741 [2024-07-26 14:07:58.102035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.741 [2024-07-26 14:07:58.102375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.741 [2024-07-26 14:07:58.102539] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.741 [2024-07-26 14:07:58.102548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.741 [2024-07-26 14:07:58.102555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.741 [2024-07-26 14:07:58.105231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.741 [2024-07-26 14:07:58.114326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.741 [2024-07-26 14:07:58.114915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.741 [2024-07-26 14:07:58.114958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.741 [2024-07-26 14:07:58.114980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.741 [2024-07-26 14:07:58.115442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.741 [2024-07-26 14:07:58.115606] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.741 [2024-07-26 14:07:58.115616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.741 [2024-07-26 14:07:58.115622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.741 [2024-07-26 14:07:58.118320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.741 [2024-07-26 14:07:58.127236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.741 [2024-07-26 14:07:58.127810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.741 [2024-07-26 14:07:58.127826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.741 [2024-07-26 14:07:58.127834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.741 [2024-07-26 14:07:58.127997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.741 [2024-07-26 14:07:58.128200] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.741 [2024-07-26 14:07:58.128210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.741 [2024-07-26 14:07:58.128216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.741 [2024-07-26 14:07:58.130878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.741 [2024-07-26 14:07:58.140126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.741 [2024-07-26 14:07:58.140723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.741 [2024-07-26 14:07:58.140765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.742 [2024-07-26 14:07:58.140787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.742 [2024-07-26 14:07:58.141182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.742 [2024-07-26 14:07:58.141366] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.742 [2024-07-26 14:07:58.141376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.742 [2024-07-26 14:07:58.141382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.742 [2024-07-26 14:07:58.144090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.742 [2024-07-26 14:07:58.152937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.742 [2024-07-26 14:07:58.153558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.742 [2024-07-26 14:07:58.153601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.742 [2024-07-26 14:07:58.153624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.742 [2024-07-26 14:07:58.154217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.742 [2024-07-26 14:07:58.154591] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.742 [2024-07-26 14:07:58.154600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.742 [2024-07-26 14:07:58.154606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.742 [2024-07-26 14:07:58.157229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.742 [2024-07-26 14:07:58.165871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.742 [2024-07-26 14:07:58.166887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.742 [2024-07-26 14:07:58.166931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:30.742 [2024-07-26 14:07:58.166952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:30.742 [2024-07-26 14:07:58.167284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:30.742 [2024-07-26 14:07:58.167488] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.742 [2024-07-26 14:07:58.167501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.742 [2024-07-26 14:07:58.167510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.742 [2024-07-26 14:07:58.171569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.003 [2024-07-26 14:07:58.179561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.004 [2024-07-26 14:07:58.180277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.004 [2024-07-26 14:07:58.180321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.004 [2024-07-26 14:07:58.180342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.004 [2024-07-26 14:07:58.180921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.004 [2024-07-26 14:07:58.181418] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.004 [2024-07-26 14:07:58.181429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.004 [2024-07-26 14:07:58.181436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.004 [2024-07-26 14:07:58.184272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.004 [2024-07-26 14:07:58.192628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.004 [2024-07-26 14:07:58.193390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.004 [2024-07-26 14:07:58.193432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.004 [2024-07-26 14:07:58.193453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.004 [2024-07-26 14:07:58.194032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.004 [2024-07-26 14:07:58.194530] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.004 [2024-07-26 14:07:58.194541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.004 [2024-07-26 14:07:58.194548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.004 [2024-07-26 14:07:58.197380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.004 [2024-07-26 14:07:58.205740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.004 [2024-07-26 14:07:58.206429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.004 [2024-07-26 14:07:58.206472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.004 [2024-07-26 14:07:58.206493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.004 [2024-07-26 14:07:58.206780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.004 [2024-07-26 14:07:58.206958] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.004 [2024-07-26 14:07:58.206968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.004 [2024-07-26 14:07:58.206975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.004 [2024-07-26 14:07:58.209805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.004 [2024-07-26 14:07:58.218764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.004 [2024-07-26 14:07:58.219488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.004 [2024-07-26 14:07:58.219531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.004 [2024-07-26 14:07:58.219554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.004 [2024-07-26 14:07:58.220145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.004 [2024-07-26 14:07:58.220664] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.004 [2024-07-26 14:07:58.220674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.004 [2024-07-26 14:07:58.220680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.004 [2024-07-26 14:07:58.223370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.004 [2024-07-26 14:07:58.231683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.004 [2024-07-26 14:07:58.232384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.004 [2024-07-26 14:07:58.232402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.004 [2024-07-26 14:07:58.232412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.004 [2024-07-26 14:07:58.232584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.004 [2024-07-26 14:07:58.232757] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.004 [2024-07-26 14:07:58.232767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.004 [2024-07-26 14:07:58.232774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.004 [2024-07-26 14:07:58.235610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.004 [2024-07-26 14:07:58.244660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.004 [2024-07-26 14:07:58.245401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.004 [2024-07-26 14:07:58.245444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.004 [2024-07-26 14:07:58.245465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.004 [2024-07-26 14:07:58.245844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.004 [2024-07-26 14:07:58.246018] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.004 [2024-07-26 14:07:58.246027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.004 [2024-07-26 14:07:58.246034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.004 [2024-07-26 14:07:58.248781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.004 [2024-07-26 14:07:58.257565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.004 [2024-07-26 14:07:58.258289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.004 [2024-07-26 14:07:58.258333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.004 [2024-07-26 14:07:58.258356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.004 [2024-07-26 14:07:58.258678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.004 [2024-07-26 14:07:58.258844] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.004 [2024-07-26 14:07:58.258853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.004 [2024-07-26 14:07:58.258859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.004 [2024-07-26 14:07:58.261495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.004 [2024-07-26 14:07:58.270469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.004 [2024-07-26 14:07:58.271112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.004 [2024-07-26 14:07:58.271128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.004 [2024-07-26 14:07:58.271136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.004 [2024-07-26 14:07:58.271299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.004 [2024-07-26 14:07:58.271462] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.004 [2024-07-26 14:07:58.271475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.004 [2024-07-26 14:07:58.271481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.004 [2024-07-26 14:07:58.274157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.004 [2024-07-26 14:07:58.283403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.004 [2024-07-26 14:07:58.284131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.004 [2024-07-26 14:07:58.284147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.004 [2024-07-26 14:07:58.284154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.004 [2024-07-26 14:07:58.284316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.004 [2024-07-26 14:07:58.284479] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.004 [2024-07-26 14:07:58.284487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.004 [2024-07-26 14:07:58.284493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.004 [2024-07-26 14:07:58.287183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.004 [2024-07-26 14:07:58.296372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.004 [2024-07-26 14:07:58.297060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.004 [2024-07-26 14:07:58.297103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.004 [2024-07-26 14:07:58.297126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.004 [2024-07-26 14:07:58.297704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.004 [2024-07-26 14:07:58.297905] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.004 [2024-07-26 14:07:58.297915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.004 [2024-07-26 14:07:58.297921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.004 [2024-07-26 14:07:58.300548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.004 [2024-07-26 14:07:58.309188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.005 [2024-07-26 14:07:58.309785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.005 [2024-07-26 14:07:58.309828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.005 [2024-07-26 14:07:58.309851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.005 [2024-07-26 14:07:58.310445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.005 [2024-07-26 14:07:58.310933] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.005 [2024-07-26 14:07:58.310942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.005 [2024-07-26 14:07:58.310948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.005 [2024-07-26 14:07:58.313590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.005 [2024-07-26 14:07:58.322121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.005 [2024-07-26 14:07:58.322783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.005 [2024-07-26 14:07:58.322824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.005 [2024-07-26 14:07:58.322846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.005 [2024-07-26 14:07:58.323439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.005 [2024-07-26 14:07:58.323903] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.005 [2024-07-26 14:07:58.323912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.005 [2024-07-26 14:07:58.323919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.005 [2024-07-26 14:07:58.326605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.005 [2024-07-26 14:07:58.335064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.005 [2024-07-26 14:07:58.335676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.005 [2024-07-26 14:07:58.335719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.005 [2024-07-26 14:07:58.335741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.005 [2024-07-26 14:07:58.336332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.005 [2024-07-26 14:07:58.336811] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.005 [2024-07-26 14:07:58.336821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.005 [2024-07-26 14:07:58.336827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.005 [2024-07-26 14:07:58.339510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.005 [2024-07-26 14:07:58.347961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.005 [2024-07-26 14:07:58.348625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.005 [2024-07-26 14:07:58.348667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.005 [2024-07-26 14:07:58.348689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.005 [2024-07-26 14:07:58.349110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.005 [2024-07-26 14:07:58.349369] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.005 [2024-07-26 14:07:58.349382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.005 [2024-07-26 14:07:58.349392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.005 [2024-07-26 14:07:58.353443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.005 [2024-07-26 14:07:58.361170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.005 [2024-07-26 14:07:58.361921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.005 [2024-07-26 14:07:58.361964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.005 [2024-07-26 14:07:58.361986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.005 [2024-07-26 14:07:58.362588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.005 [2024-07-26 14:07:58.363034] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.005 [2024-07-26 14:07:58.363048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.005 [2024-07-26 14:07:58.363055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.005 [2024-07-26 14:07:58.365732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.005 [2024-07-26 14:07:58.373990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.005 [2024-07-26 14:07:58.374731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.005 [2024-07-26 14:07:58.374774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.005 [2024-07-26 14:07:58.374795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.005 [2024-07-26 14:07:58.375095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.005 [2024-07-26 14:07:58.375269] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.005 [2024-07-26 14:07:58.375278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.005 [2024-07-26 14:07:58.375284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.005 [2024-07-26 14:07:58.377937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.005 [2024-07-26 14:07:58.386898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.005 [2024-07-26 14:07:58.387628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.005 [2024-07-26 14:07:58.387671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.005 [2024-07-26 14:07:58.387692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.005 [2024-07-26 14:07:58.388053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.005 [2024-07-26 14:07:58.388242] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.005 [2024-07-26 14:07:58.388252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.005 [2024-07-26 14:07:58.388258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.005 [2024-07-26 14:07:58.390910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.005 [2024-07-26 14:07:58.399877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.005 [2024-07-26 14:07:58.400761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.005 [2024-07-26 14:07:58.400808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.005 [2024-07-26 14:07:58.400831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.005 [2024-07-26 14:07:58.401152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.005 [2024-07-26 14:07:58.401326] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.005 [2024-07-26 14:07:58.401335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.005 [2024-07-26 14:07:58.401345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.005 [2024-07-26 14:07:58.403997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.005 [2024-07-26 14:07:58.412807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.005 [2024-07-26 14:07:58.413517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.005 [2024-07-26 14:07:58.413561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.005 [2024-07-26 14:07:58.413584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.005 [2024-07-26 14:07:58.414146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.005 [2024-07-26 14:07:58.414326] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.005 [2024-07-26 14:07:58.414336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.005 [2024-07-26 14:07:58.414343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.005 [2024-07-26 14:07:58.417037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.005 [2024-07-26 14:07:58.425616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.005 [2024-07-26 14:07:58.426272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.005 [2024-07-26 14:07:58.426316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.005 [2024-07-26 14:07:58.426337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.005 [2024-07-26 14:07:58.426897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.005 [2024-07-26 14:07:58.427066] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.005 [2024-07-26 14:07:58.427075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.005 [2024-07-26 14:07:58.427082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.005 [2024-07-26 14:07:58.429675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.268 [2024-07-26 14:07:58.438635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.268 [2024-07-26 14:07:58.439379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.268 [2024-07-26 14:07:58.439423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.268 [2024-07-26 14:07:58.439444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.268 [2024-07-26 14:07:58.440024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.268 [2024-07-26 14:07:58.440266] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.268 [2024-07-26 14:07:58.440279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.268 [2024-07-26 14:07:58.440289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.268 [2024-07-26 14:07:58.444344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.268 [2024-07-26 14:07:58.452220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.268 [2024-07-26 14:07:58.452935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.268 [2024-07-26 14:07:58.452984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.268 [2024-07-26 14:07:58.453007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.268 [2024-07-26 14:07:58.453603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.268 [2024-07-26 14:07:58.454140] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.268 [2024-07-26 14:07:58.454150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.268 [2024-07-26 14:07:58.454156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.268 [2024-07-26 14:07:58.456831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.268 [2024-07-26 14:07:58.465030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.268 [2024-07-26 14:07:58.465736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.268 [2024-07-26 14:07:58.465751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.268 [2024-07-26 14:07:58.465758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.268 [2024-07-26 14:07:58.465920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.268 [2024-07-26 14:07:58.466090] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.268 [2024-07-26 14:07:58.466116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.268 [2024-07-26 14:07:58.466123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.268 [2024-07-26 14:07:58.468787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.268 [2024-07-26 14:07:58.477892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.268 [2024-07-26 14:07:58.478630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.268 [2024-07-26 14:07:58.478673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.268 [2024-07-26 14:07:58.478694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.268 [2024-07-26 14:07:58.478980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.268 [2024-07-26 14:07:58.479169] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.268 [2024-07-26 14:07:58.479180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.268 [2024-07-26 14:07:58.479187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.268 [2024-07-26 14:07:58.481846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.268 [2024-07-26 14:07:58.491015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.268 [2024-07-26 14:07:58.491757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.268 [2024-07-26 14:07:58.491800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.268 [2024-07-26 14:07:58.491821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.268 [2024-07-26 14:07:58.492291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.268 [2024-07-26 14:07:58.492468] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.268 [2024-07-26 14:07:58.492478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.268 [2024-07-26 14:07:58.492484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.268 [2024-07-26 14:07:58.495225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.268 [2024-07-26 14:07:58.504011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.268 [2024-07-26 14:07:58.504746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.268 [2024-07-26 14:07:58.504788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.268 [2024-07-26 14:07:58.504810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.268 [2024-07-26 14:07:58.505223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.268 [2024-07-26 14:07:58.505397] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.268 [2024-07-26 14:07:58.505407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.268 [2024-07-26 14:07:58.505413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.268 [2024-07-26 14:07:58.508065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.268 [2024-07-26 14:07:58.516900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.268 [2024-07-26 14:07:58.517623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.268 [2024-07-26 14:07:58.517665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.268 [2024-07-26 14:07:58.517686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.268 [2024-07-26 14:07:58.518279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.268 [2024-07-26 14:07:58.518493] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.268 [2024-07-26 14:07:58.518502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.268 [2024-07-26 14:07:58.518508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.268 [2024-07-26 14:07:58.521157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.268 [2024-07-26 14:07:58.529914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.268 [2024-07-26 14:07:58.530651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.268 [2024-07-26 14:07:58.530695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.268 [2024-07-26 14:07:58.530716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.268 [2024-07-26 14:07:58.531064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.268 [2024-07-26 14:07:58.531254] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.268 [2024-07-26 14:07:58.531264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.268 [2024-07-26 14:07:58.531270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.268 [2024-07-26 14:07:58.533922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.268 [2024-07-26 14:07:58.542770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.268 [2024-07-26 14:07:58.543507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.268 [2024-07-26 14:07:58.543550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.268 [2024-07-26 14:07:58.543571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.268 [2024-07-26 14:07:58.544115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.268 [2024-07-26 14:07:58.544289] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.268 [2024-07-26 14:07:58.544299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.268 [2024-07-26 14:07:58.544305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.269 [2024-07-26 14:07:58.546959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.269 [2024-07-26 14:07:58.555668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.269 [2024-07-26 14:07:58.556375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.269 [2024-07-26 14:07:58.556418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.269 [2024-07-26 14:07:58.556440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.269 [2024-07-26 14:07:58.556638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.269 [2024-07-26 14:07:58.556801] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.269 [2024-07-26 14:07:58.556811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.269 [2024-07-26 14:07:58.556816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.269 [2024-07-26 14:07:58.559500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.269 [2024-07-26 14:07:58.568455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.269 [2024-07-26 14:07:58.569192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.269 [2024-07-26 14:07:58.569235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.269 [2024-07-26 14:07:58.569257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.269 [2024-07-26 14:07:58.569493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.269 [2024-07-26 14:07:58.569656] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.269 [2024-07-26 14:07:58.569666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.269 [2024-07-26 14:07:58.569672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.269 [2024-07-26 14:07:58.572355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.269 [2024-07-26 14:07:58.581311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.269 [2024-07-26 14:07:58.581973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.269 [2024-07-26 14:07:58.582015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.269 [2024-07-26 14:07:58.582059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.269 [2024-07-26 14:07:58.582504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.269 [2024-07-26 14:07:58.582677] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.269 [2024-07-26 14:07:58.582687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.269 [2024-07-26 14:07:58.582693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.269 [2024-07-26 14:07:58.585326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.269 [2024-07-26 14:07:58.594127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.269 [2024-07-26 14:07:58.594830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.269 [2024-07-26 14:07:58.594872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.269 [2024-07-26 14:07:58.594894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.269 [2024-07-26 14:07:58.595337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.269 [2024-07-26 14:07:58.595511] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.269 [2024-07-26 14:07:58.595521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.269 [2024-07-26 14:07:58.595528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.269 [2024-07-26 14:07:58.598172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.269 [2024-07-26 14:07:58.607032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.269 [2024-07-26 14:07:58.607771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.269 [2024-07-26 14:07:58.607813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.269 [2024-07-26 14:07:58.607834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.269 [2024-07-26 14:07:58.608300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.269 [2024-07-26 14:07:58.608473] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.269 [2024-07-26 14:07:58.608483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.269 [2024-07-26 14:07:58.608489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.269 [2024-07-26 14:07:58.611134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.269 [2024-07-26 14:07:58.619882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.269 [2024-07-26 14:07:58.620619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.269 [2024-07-26 14:07:58.620663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.269 [2024-07-26 14:07:58.620684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.269 [2024-07-26 14:07:58.620882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.269 [2024-07-26 14:07:58.621052] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.269 [2024-07-26 14:07:58.621064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.269 [2024-07-26 14:07:58.621086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.269 [2024-07-26 14:07:58.623749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.269 [2024-07-26 14:07:58.632801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.269 [2024-07-26 14:07:58.633528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.269 [2024-07-26 14:07:58.633569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.269 [2024-07-26 14:07:58.633590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.269 [2024-07-26 14:07:58.634182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.269 [2024-07-26 14:07:58.634627] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.269 [2024-07-26 14:07:58.634636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.269 [2024-07-26 14:07:58.634643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.269 [2024-07-26 14:07:58.637283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.269 [2024-07-26 14:07:58.645627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.269 [2024-07-26 14:07:58.646356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.269 [2024-07-26 14:07:58.646398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.269 [2024-07-26 14:07:58.646420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.269 [2024-07-26 14:07:58.646999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.269 [2024-07-26 14:07:58.647555] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.269 [2024-07-26 14:07:58.647565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.269 [2024-07-26 14:07:58.647572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.269 [2024-07-26 14:07:58.650306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.269 [2024-07-26 14:07:58.658643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.269 [2024-07-26 14:07:58.659370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.269 [2024-07-26 14:07:58.659412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.269 [2024-07-26 14:07:58.659434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.269 [2024-07-26 14:07:58.660011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.269 [2024-07-26 14:07:58.660608] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.269 [2024-07-26 14:07:58.660634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.269 [2024-07-26 14:07:58.660655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.269 [2024-07-26 14:07:58.663289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.269 [2024-07-26 14:07:58.671481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.269 [2024-07-26 14:07:58.672207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.269 [2024-07-26 14:07:58.672261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.269 [2024-07-26 14:07:58.672283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.269 [2024-07-26 14:07:58.672701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.269 [2024-07-26 14:07:58.672865] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.269 [2024-07-26 14:07:58.672874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.269 [2024-07-26 14:07:58.672880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.269 [2024-07-26 14:07:58.675566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.270 [2024-07-26 14:07:58.684305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.270 [2024-07-26 14:07:58.684963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.270 [2024-07-26 14:07:58.685005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.270 [2024-07-26 14:07:58.685026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.270 [2024-07-26 14:07:58.685619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.270 [2024-07-26 14:07:58.685938] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.270 [2024-07-26 14:07:58.685947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.270 [2024-07-26 14:07:58.685954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.270 [2024-07-26 14:07:58.688588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.270 [2024-07-26 14:07:58.697366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.270 [2024-07-26 14:07:58.698108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.270 [2024-07-26 14:07:58.698152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.270 [2024-07-26 14:07:58.698174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.270 [2024-07-26 14:07:58.698752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.270 [2024-07-26 14:07:58.699178] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.270 [2024-07-26 14:07:58.699188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.270 [2024-07-26 14:07:58.699194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.532 [2024-07-26 14:07:58.701977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.532 [2024-07-26 14:07:58.710419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.532 [2024-07-26 14:07:58.711147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.532 [2024-07-26 14:07:58.711191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.532 [2024-07-26 14:07:58.711215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.532 [2024-07-26 14:07:58.711592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.532 [2024-07-26 14:07:58.711757] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.532 [2024-07-26 14:07:58.711766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.532 [2024-07-26 14:07:58.711772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.532 [2024-07-26 14:07:58.714463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.532 [2024-07-26 14:07:58.723390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.532 [2024-07-26 14:07:58.724031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.532 [2024-07-26 14:07:58.724088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.532 [2024-07-26 14:07:58.724111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.532 [2024-07-26 14:07:58.724690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.532 [2024-07-26 14:07:58.725057] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.532 [2024-07-26 14:07:58.725084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.532 [2024-07-26 14:07:58.725093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.532 [2024-07-26 14:07:58.727698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.532 [2024-07-26 14:07:58.736390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.532 [2024-07-26 14:07:58.737126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.532 [2024-07-26 14:07:58.737144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.532 [2024-07-26 14:07:58.737152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.532 [2024-07-26 14:07:58.737328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.532 [2024-07-26 14:07:58.737506] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.532 [2024-07-26 14:07:58.737516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.532 [2024-07-26 14:07:58.737523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.532 [2024-07-26 14:07:58.740348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.532 [2024-07-26 14:07:58.749431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.532 [2024-07-26 14:07:58.750081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.532 [2024-07-26 14:07:58.750125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.532 [2024-07-26 14:07:58.750148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.532 [2024-07-26 14:07:58.750584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.532 [2024-07-26 14:07:58.750749] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.532 [2024-07-26 14:07:58.750758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.532 [2024-07-26 14:07:58.750771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.532 [2024-07-26 14:07:58.753514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.532 [2024-07-26 14:07:58.762247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.532 [2024-07-26 14:07:58.762941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.532 [2024-07-26 14:07:58.762983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.532 [2024-07-26 14:07:58.763004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.532 [2024-07-26 14:07:58.763598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.532 [2024-07-26 14:07:58.763979] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.532 [2024-07-26 14:07:58.763989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.532 [2024-07-26 14:07:58.763995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.532 [2024-07-26 14:07:58.766617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.532 [2024-07-26 14:07:58.775085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.532 [2024-07-26 14:07:58.775761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.532 [2024-07-26 14:07:58.775803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.532 [2024-07-26 14:07:58.775826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.533 [2024-07-26 14:07:58.776099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.533 [2024-07-26 14:07:58.776290] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.533 [2024-07-26 14:07:58.776299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.533 [2024-07-26 14:07:58.776307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.533 [2024-07-26 14:07:58.778896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.533 [2024-07-26 14:07:58.788019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.533 [2024-07-26 14:07:58.788769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.533 [2024-07-26 14:07:58.788812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.533 [2024-07-26 14:07:58.788833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.533 [2024-07-26 14:07:58.789115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.533 [2024-07-26 14:07:58.789289] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.533 [2024-07-26 14:07:58.789298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.533 [2024-07-26 14:07:58.789305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.533 [2024-07-26 14:07:58.791957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.533 [2024-07-26 14:07:58.800912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.533 [2024-07-26 14:07:58.801660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.533 [2024-07-26 14:07:58.801711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.533 [2024-07-26 14:07:58.801733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.533 [2024-07-26 14:07:58.802110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.533 [2024-07-26 14:07:58.802295] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.533 [2024-07-26 14:07:58.802305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.533 [2024-07-26 14:07:58.802310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.533 [2024-07-26 14:07:58.804902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.533 [2024-07-26 14:07:58.813881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.533 [2024-07-26 14:07:58.814527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.533 [2024-07-26 14:07:58.814542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.533 [2024-07-26 14:07:58.814550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.533 [2024-07-26 14:07:58.814712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.533 [2024-07-26 14:07:58.814876] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.533 [2024-07-26 14:07:58.814885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.533 [2024-07-26 14:07:58.814891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.533 [2024-07-26 14:07:58.817534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.533 [2024-07-26 14:07:58.826808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.533 [2024-07-26 14:07:58.827518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.533 [2024-07-26 14:07:58.827560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.533 [2024-07-26 14:07:58.827582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.533 [2024-07-26 14:07:58.827926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.533 [2024-07-26 14:07:58.828103] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.533 [2024-07-26 14:07:58.828113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.533 [2024-07-26 14:07:58.828120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.533 [2024-07-26 14:07:58.830801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.533 [2024-07-26 14:07:58.839606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.533 [2024-07-26 14:07:58.840334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.533 [2024-07-26 14:07:58.840376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.533 [2024-07-26 14:07:58.840397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.533 [2024-07-26 14:07:58.840825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.533 [2024-07-26 14:07:58.840992] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.533 [2024-07-26 14:07:58.841002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.533 [2024-07-26 14:07:58.841008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.533 [2024-07-26 14:07:58.843692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.533 [2024-07-26 14:07:58.852448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.533 [2024-07-26 14:07:58.853200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.533 [2024-07-26 14:07:58.853243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.533 [2024-07-26 14:07:58.853277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.533 [2024-07-26 14:07:58.853449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.533 [2024-07-26 14:07:58.853621] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.533 [2024-07-26 14:07:58.853631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.533 [2024-07-26 14:07:58.853637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.533 [2024-07-26 14:07:58.856366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.533 [2024-07-26 14:07:58.865319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.533 [2024-07-26 14:07:58.866058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.533 [2024-07-26 14:07:58.866103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.533 [2024-07-26 14:07:58.866127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.533 [2024-07-26 14:07:58.866707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.533 [2024-07-26 14:07:58.867246] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.533 [2024-07-26 14:07:58.867257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.533 [2024-07-26 14:07:58.867264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.533 [2024-07-26 14:07:58.869920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.533 [2024-07-26 14:07:58.878122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.533 [2024-07-26 14:07:58.878795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.533 [2024-07-26 14:07:58.878839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.533 [2024-07-26 14:07:58.878861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.533 [2024-07-26 14:07:58.879229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.533 [2024-07-26 14:07:58.879403] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.533 [2024-07-26 14:07:58.879413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.533 [2024-07-26 14:07:58.879419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.533 [2024-07-26 14:07:58.882074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.533 [2024-07-26 14:07:58.891036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.533 [2024-07-26 14:07:58.891754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.533 [2024-07-26 14:07:58.891797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.533 [2024-07-26 14:07:58.891819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.533 [2024-07-26 14:07:58.892106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.533 [2024-07-26 14:07:58.892280] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.533 [2024-07-26 14:07:58.892290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.533 [2024-07-26 14:07:58.892297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.533 [2024-07-26 14:07:58.894948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.533 [2024-07-26 14:07:58.903908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.533 [2024-07-26 14:07:58.904628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.533 [2024-07-26 14:07:58.904671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.533 [2024-07-26 14:07:58.904692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.533 [2024-07-26 14:07:58.905216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.534 [2024-07-26 14:07:58.905390] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.534 [2024-07-26 14:07:58.905400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.534 [2024-07-26 14:07:58.905407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.534 [2024-07-26 14:07:58.908058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.534 [2024-07-26 14:07:58.916841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.534 [2024-07-26 14:07:58.917547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.534 [2024-07-26 14:07:58.917592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.534 [2024-07-26 14:07:58.917614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.534 [2024-07-26 14:07:58.918205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.534 [2024-07-26 14:07:58.918380] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.534 [2024-07-26 14:07:58.918390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.534 [2024-07-26 14:07:58.918396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.534 [2024-07-26 14:07:58.921054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.534 [2024-07-26 14:07:58.929719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.534 [2024-07-26 14:07:58.930467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.534 [2024-07-26 14:07:58.930509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.534 [2024-07-26 14:07:58.930538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.534 [2024-07-26 14:07:58.931054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.534 [2024-07-26 14:07:58.931244] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.534 [2024-07-26 14:07:58.931253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.534 [2024-07-26 14:07:58.931260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.534 [2024-07-26 14:07:58.933911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.534 [2024-07-26 14:07:58.942564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.534 [2024-07-26 14:07:58.943335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.534 [2024-07-26 14:07:58.943377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.534 [2024-07-26 14:07:58.943399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.534 [2024-07-26 14:07:58.943978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.534 [2024-07-26 14:07:58.944211] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.534 [2024-07-26 14:07:58.944222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.534 [2024-07-26 14:07:58.944228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.534 [2024-07-26 14:07:58.946882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.534 [2024-07-26 14:07:58.955476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.534 [2024-07-26 14:07:58.956207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.534 [2024-07-26 14:07:58.956249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.534 [2024-07-26 14:07:58.956271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.534 [2024-07-26 14:07:58.956451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.534 [2024-07-26 14:07:58.956614] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.534 [2024-07-26 14:07:58.956624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.534 [2024-07-26 14:07:58.956630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.534 [2024-07-26 14:07:58.959318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.796 [2024-07-26 14:07:58.968527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.796 [2024-07-26 14:07:58.969128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.796 [2024-07-26 14:07:58.969171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.796 [2024-07-26 14:07:58.969194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.796 [2024-07-26 14:07:58.969772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.796 [2024-07-26 14:07:58.970356] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.796 [2024-07-26 14:07:58.970369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.796 [2024-07-26 14:07:58.970376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.796 [2024-07-26 14:07:58.973121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.796 [2024-07-26 14:07:58.981488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.796 [2024-07-26 14:07:58.982220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.796 [2024-07-26 14:07:58.982264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.796 [2024-07-26 14:07:58.982285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.796 [2024-07-26 14:07:58.982520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.796 [2024-07-26 14:07:58.982682] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.796 [2024-07-26 14:07:58.982692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.796 [2024-07-26 14:07:58.982698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.796 [2024-07-26 14:07:58.985386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.796 [2024-07-26 14:07:58.994347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.796 [2024-07-26 14:07:58.995034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.796 [2024-07-26 14:07:58.995054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.796 [2024-07-26 14:07:58.995062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.796 [2024-07-26 14:07:58.995234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.796 [2024-07-26 14:07:58.995407] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.796 [2024-07-26 14:07:58.995416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.796 [2024-07-26 14:07:58.995423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.796 [2024-07-26 14:07:58.998256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.796 [2024-07-26 14:07:59.007310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.796 [2024-07-26 14:07:59.007963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.796 [2024-07-26 14:07:59.008005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.796 [2024-07-26 14:07:59.008027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.796 [2024-07-26 14:07:59.008330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.796 [2024-07-26 14:07:59.008505] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.796 [2024-07-26 14:07:59.008515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.796 [2024-07-26 14:07:59.008521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.796 [2024-07-26 14:07:59.011261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.796 [2024-07-26 14:07:59.020333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.796 [2024-07-26 14:07:59.021019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.796 [2024-07-26 14:07:59.021073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.796 [2024-07-26 14:07:59.021095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.796 [2024-07-26 14:07:59.021590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.796 [2024-07-26 14:07:59.021763] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.796 [2024-07-26 14:07:59.021773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.797 [2024-07-26 14:07:59.021779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.797 [2024-07-26 14:07:59.024456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.797 [2024-07-26 14:07:59.033248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.797 [2024-07-26 14:07:59.033916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.797 [2024-07-26 14:07:59.033959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.797 [2024-07-26 14:07:59.033981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.797 [2024-07-26 14:07:59.034383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.797 [2024-07-26 14:07:59.034547] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.797 [2024-07-26 14:07:59.034556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.797 [2024-07-26 14:07:59.034563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.797 [2024-07-26 14:07:59.037178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.797 [2024-07-26 14:07:59.046139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.797 [2024-07-26 14:07:59.046794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.797 [2024-07-26 14:07:59.046836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.797 [2024-07-26 14:07:59.046859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.797 [2024-07-26 14:07:59.047452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.797 [2024-07-26 14:07:59.047799] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.797 [2024-07-26 14:07:59.047808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.797 [2024-07-26 14:07:59.047815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.797 [2024-07-26 14:07:59.050511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.797 [2024-07-26 14:07:59.059044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.797 [2024-07-26 14:07:59.059677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.797 [2024-07-26 14:07:59.059719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.797 [2024-07-26 14:07:59.059742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.797 [2024-07-26 14:07:59.060343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.797 [2024-07-26 14:07:59.060537] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.797 [2024-07-26 14:07:59.060547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.797 [2024-07-26 14:07:59.060553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.797 [2024-07-26 14:07:59.063246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.797 [2024-07-26 14:07:59.071900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.797 [2024-07-26 14:07:59.072644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.797 [2024-07-26 14:07:59.072687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.797 [2024-07-26 14:07:59.072709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.797 [2024-07-26 14:07:59.073302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.797 [2024-07-26 14:07:59.073830] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.797 [2024-07-26 14:07:59.073840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.797 [2024-07-26 14:07:59.073846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.797 [2024-07-26 14:07:59.076472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.797 [2024-07-26 14:07:59.084730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.797 [2024-07-26 14:07:59.085439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.797 [2024-07-26 14:07:59.085482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.797 [2024-07-26 14:07:59.085504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.797 [2024-07-26 14:07:59.085806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.797 [2024-07-26 14:07:59.085970] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.797 [2024-07-26 14:07:59.085980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.797 [2024-07-26 14:07:59.085985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.797 [2024-07-26 14:07:59.088670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.797 [2024-07-26 14:07:59.097547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.797 [2024-07-26 14:07:59.098275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.797 [2024-07-26 14:07:59.098316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.797 [2024-07-26 14:07:59.098339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.797 [2024-07-26 14:07:59.098742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.797 [2024-07-26 14:07:59.098906] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.797 [2024-07-26 14:07:59.098915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.797 [2024-07-26 14:07:59.098924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.797 [2024-07-26 14:07:59.101611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.797 [2024-07-26 14:07:59.110481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.797 [2024-07-26 14:07:59.111142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.797 [2024-07-26 14:07:59.111184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.797 [2024-07-26 14:07:59.111207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.797 [2024-07-26 14:07:59.111785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.797 [2024-07-26 14:07:59.111963] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.797 [2024-07-26 14:07:59.111971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.797 [2024-07-26 14:07:59.111977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.797 [2024-07-26 14:07:59.114665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.797 [2024-07-26 14:07:59.123275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.797 [2024-07-26 14:07:59.124019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.797 [2024-07-26 14:07:59.124073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.797 [2024-07-26 14:07:59.124097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.797 [2024-07-26 14:07:59.124499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.797 [2024-07-26 14:07:59.124663] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.797 [2024-07-26 14:07:59.124672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.797 [2024-07-26 14:07:59.124678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.797 [2024-07-26 14:07:59.127362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.797 [2024-07-26 14:07:59.136180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.797 [2024-07-26 14:07:59.136723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.797 [2024-07-26 14:07:59.136739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.797 [2024-07-26 14:07:59.136746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.797 [2024-07-26 14:07:59.136909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.797 [2024-07-26 14:07:59.137094] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.797 [2024-07-26 14:07:59.137105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.798 [2024-07-26 14:07:59.137111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.798 [2024-07-26 14:07:59.139774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.798 [2024-07-26 14:07:59.149102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.798 [2024-07-26 14:07:59.149845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.798 [2024-07-26 14:07:59.149893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.798 [2024-07-26 14:07:59.149915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.798 [2024-07-26 14:07:59.150509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.798 [2024-07-26 14:07:59.150709] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.798 [2024-07-26 14:07:59.150719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.798 [2024-07-26 14:07:59.150725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.798 [2024-07-26 14:07:59.153359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.798 [2024-07-26 14:07:59.162013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.798 [2024-07-26 14:07:59.162747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.798 [2024-07-26 14:07:59.162791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.798 [2024-07-26 14:07:59.162823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.798 [2024-07-26 14:07:59.162985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.798 [2024-07-26 14:07:59.163175] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.798 [2024-07-26 14:07:59.163185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.798 [2024-07-26 14:07:59.163191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.798 [2024-07-26 14:07:59.165849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.798 [2024-07-26 14:07:59.174900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.798 [2024-07-26 14:07:59.175557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.798 [2024-07-26 14:07:59.175599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.798 [2024-07-26 14:07:59.175621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.798 [2024-07-26 14:07:59.175972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.798 [2024-07-26 14:07:59.176162] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.798 [2024-07-26 14:07:59.176172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.798 [2024-07-26 14:07:59.176178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.798 [2024-07-26 14:07:59.178838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.798 [2024-07-26 14:07:59.187799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.798 [2024-07-26 14:07:59.188512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.798 [2024-07-26 14:07:59.188556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.798 [2024-07-26 14:07:59.188578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.798 [2024-07-26 14:07:59.189032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.798 [2024-07-26 14:07:59.189228] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.798 [2024-07-26 14:07:59.189238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.798 [2024-07-26 14:07:59.189244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3110530 Killed "${NVMF_APP[@]}" "$@" 00:26:31.798 14:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:31.798 14:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:31.798 14:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:31.798 [2024-07-26 14:07:59.191983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.798 14:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:31.798 14:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:31.798 14:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3111944 00:26:31.798 14:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:31.798 14:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3111944 00:26:31.798 14:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3111944 ']' 00:26:31.798 14:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.798 14:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:31.798 14:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:31.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.798 14:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:31.798 [2024-07-26 14:07:59.200966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.798 14:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:31.798 [2024-07-26 14:07:59.201713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.798 [2024-07-26 14:07:59.201729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.798 [2024-07-26 14:07:59.201737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.798 [2024-07-26 14:07:59.201913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.798 [2024-07-26 14:07:59.202096] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.798 [2024-07-26 14:07:59.202106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.798 [2024-07-26 14:07:59.202113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.798 [2024-07-26 14:07:59.204937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
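At this point bdevperf.sh has killed the previous target (the Killed "${NVMF_APP[@]}" notice above) and tgt_init restarts nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, then waits for its RPC socket. A rough standalone sketch of that restart step, assuming the test helpers (nvmfappstart, waitforlisten) are not available; the flags and paths are the ones visible in the trace:

# Not the real tgt_init/nvmfappstart helpers - just the shape of the step:
# start a fresh target with the flags seen in the log and poll for its RPC
# socket instead of calling waitforlisten. Needs root for ip netns exec.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!          # pid of the ip-netns wrapper, close enough for a sketch
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
echo "nvmf_tgt (pid $nvmfpid) is up and listening on /var/tmp/spdk.sock"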
00:26:31.798 [2024-07-26 14:07:59.214122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.798 [2024-07-26 14:07:59.214840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.798 [2024-07-26 14:07:59.214857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.798 [2024-07-26 14:07:59.214864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.798 [2024-07-26 14:07:59.215040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.798 [2024-07-26 14:07:59.215227] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.798 [2024-07-26 14:07:59.215236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.798 [2024-07-26 14:07:59.215243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.798 [2024-07-26 14:07:59.218075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.798 [2024-07-26 14:07:59.227252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.798 [2024-07-26 14:07:59.227909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.798 [2024-07-26 14:07:59.227926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:31.798 [2024-07-26 14:07:59.227934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:31.798 [2024-07-26 14:07:59.228116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:31.798 [2024-07-26 14:07:59.228295] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.798 [2024-07-26 14:07:59.228305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.798 [2024-07-26 14:07:59.228312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.060 [2024-07-26 14:07:59.231143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.060 [2024-07-26 14:07:59.240311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.060 [2024-07-26 14:07:59.241053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.060 [2024-07-26 14:07:59.241070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.060 [2024-07-26 14:07:59.241078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.060 [2024-07-26 14:07:59.241256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.060 [2024-07-26 14:07:59.241441] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.060 [2024-07-26 14:07:59.241451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.060 [2024-07-26 14:07:59.241457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.060 [2024-07-26 14:07:59.244220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.060 [2024-07-26 14:07:59.245078] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:26:32.060 [2024-07-26 14:07:59.245121] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.060 [2024-07-26 14:07:59.253400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.060 [2024-07-26 14:07:59.254135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.060 [2024-07-26 14:07:59.254153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.060 [2024-07-26 14:07:59.254161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.060 [2024-07-26 14:07:59.254338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.060 [2024-07-26 14:07:59.254520] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.060 [2024-07-26 14:07:59.254530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.060 [2024-07-26 14:07:59.254537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.060 [2024-07-26 14:07:59.257367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.060 [2024-07-26 14:07:59.266543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.060 [2024-07-26 14:07:59.267190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.060 [2024-07-26 14:07:59.267208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.060 [2024-07-26 14:07:59.267216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.060 [2024-07-26 14:07:59.267393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.060 [2024-07-26 14:07:59.267572] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.060 [2024-07-26 14:07:59.267581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.060 [2024-07-26 14:07:59.267588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.060 [2024-07-26 14:07:59.270416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.060 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.060 [2024-07-26 14:07:59.279588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.060 [2024-07-26 14:07:59.280321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.060 [2024-07-26 14:07:59.280338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.060 [2024-07-26 14:07:59.280346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.060 [2024-07-26 14:07:59.280523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.060 [2024-07-26 14:07:59.280701] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.060 [2024-07-26 14:07:59.280710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.060 [2024-07-26 14:07:59.280718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.060 [2024-07-26 14:07:59.283546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
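The "EAL: No free 2048 kB hugepages reported on node 1" message is informational here (the target still initializes), but since DPDK takes its memory from hugepages it is worth confirming where they were actually reserved. A quick per-NUMA-node check, plus the usual SPDK setup step; HUGEMEM is the variable scripts/setup.sh is commonly driven with, verify against the SPDK tree in use:

# Show how many 2 MB hugepages each NUMA node currently has (nr/free/surplus).
grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/*_hugepages
# Reserve hugepages via SPDK's helper (run from the SPDK repo root, as root).
sudo HUGEMEM=4096 ./scripts/setup.sh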
00:26:32.060 [2024-07-26 14:07:59.292658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.060 [2024-07-26 14:07:59.293388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.060 [2024-07-26 14:07:59.293405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.060 [2024-07-26 14:07:59.293413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.060 [2024-07-26 14:07:59.293585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.060 [2024-07-26 14:07:59.293758] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.060 [2024-07-26 14:07:59.293768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.060 [2024-07-26 14:07:59.293775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.060 [2024-07-26 14:07:59.296546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.060 [2024-07-26 14:07:59.303251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:32.060 [2024-07-26 14:07:59.305657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.060 [2024-07-26 14:07:59.306374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.060 [2024-07-26 14:07:59.306391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.060 [2024-07-26 14:07:59.306398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.060 [2024-07-26 14:07:59.306571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.060 [2024-07-26 14:07:59.306744] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.060 [2024-07-26 14:07:59.306753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.060 [2024-07-26 14:07:59.306759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.060 [2024-07-26 14:07:59.309569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
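The target was started with -m 0xE, and app.c reports three available cores, which is consistent: 0xE is binary 1110, i.e. CPUs 1, 2 and 3 with CPU 0 excluded, matching the three reactors reported further down on cores 1-3. A one-liner to decode such a mask:

# Decode an SPDK/DPDK core mask into the CPU list it selects.
mask=0xE; printf 'mask %s -> cpus:' "$mask"
for i in $(seq 0 31); do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done; echo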
00:26:32.060 [2024-07-26 14:07:59.318686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.060 [2024-07-26 14:07:59.319430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.060 [2024-07-26 14:07:59.319447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.060 [2024-07-26 14:07:59.319455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.060 [2024-07-26 14:07:59.319628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.060 [2024-07-26 14:07:59.319800] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.060 [2024-07-26 14:07:59.319809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.060 [2024-07-26 14:07:59.319816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.060 [2024-07-26 14:07:59.322563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.060 [2024-07-26 14:07:59.331657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.060 [2024-07-26 14:07:59.332234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.060 [2024-07-26 14:07:59.332251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.060 [2024-07-26 14:07:59.332259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.060 [2024-07-26 14:07:59.332430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.060 [2024-07-26 14:07:59.332604] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.060 [2024-07-26 14:07:59.332613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.061 [2024-07-26 14:07:59.332620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.061 [2024-07-26 14:07:59.335516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.061 [2024-07-26 14:07:59.344827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.061 [2024-07-26 14:07:59.345572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.061 [2024-07-26 14:07:59.345595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.061 [2024-07-26 14:07:59.345604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.061 [2024-07-26 14:07:59.345777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.061 [2024-07-26 14:07:59.345951] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.061 [2024-07-26 14:07:59.345961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.061 [2024-07-26 14:07:59.345968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.061 [2024-07-26 14:07:59.348717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.061 [2024-07-26 14:07:59.357861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.061 [2024-07-26 14:07:59.358604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.061 [2024-07-26 14:07:59.358622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.061 [2024-07-26 14:07:59.358630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.061 [2024-07-26 14:07:59.358802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.061 [2024-07-26 14:07:59.358977] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.061 [2024-07-26 14:07:59.358986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.061 [2024-07-26 14:07:59.358992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.061 [2024-07-26 14:07:59.361735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.061 [2024-07-26 14:07:59.370809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.061 [2024-07-26 14:07:59.371540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.061 [2024-07-26 14:07:59.371558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.061 [2024-07-26 14:07:59.371566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.061 [2024-07-26 14:07:59.371744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.061 [2024-07-26 14:07:59.371923] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.061 [2024-07-26 14:07:59.371933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.061 [2024-07-26 14:07:59.371940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.061 [2024-07-26 14:07:59.374768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.061 [2024-07-26 14:07:59.383703] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.061 [2024-07-26 14:07:59.383729] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.061 [2024-07-26 14:07:59.383736] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.061 [2024-07-26 14:07:59.383742] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.061 [2024-07-26 14:07:59.383747] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:32.061 [2024-07-26 14:07:59.383787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:32.061 [2024-07-26 14:07:59.383872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:32.061 [2024-07-26 14:07:59.383873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.061 [2024-07-26 14:07:59.383956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.061 [2024-07-26 14:07:59.384638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.061 [2024-07-26 14:07:59.384656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.061 [2024-07-26 14:07:59.384664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.061 [2024-07-26 14:07:59.384842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.061 [2024-07-26 14:07:59.385020] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.061 [2024-07-26 14:07:59.385030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.061 [2024-07-26 14:07:59.385036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
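The app_setup_trace notices above already name the two ways to get at the tracepoint data for instance 0; spelled out as commands (the spdk_trace binary sits under build/bin in an SPDK build tree, adjust the path for your layout):

# Live snapshot of the nvmf tracepoint group for app instance 0, exactly as
# the notice suggests, or copy the shared-memory file for offline analysis.
./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0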
00:26:32.061 [2024-07-26 14:07:59.387925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.061 [2024-07-26 14:07:59.397325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.061 [2024-07-26 14:07:59.398004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.061 [2024-07-26 14:07:59.398024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.061 [2024-07-26 14:07:59.398033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.061 [2024-07-26 14:07:59.398219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.061 [2024-07-26 14:07:59.398398] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.061 [2024-07-26 14:07:59.398408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.061 [2024-07-26 14:07:59.398415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.061 [2024-07-26 14:07:59.401245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.061 [2024-07-26 14:07:59.410440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.061 [2024-07-26 14:07:59.411109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.061 [2024-07-26 14:07:59.411131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.061 [2024-07-26 14:07:59.411140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.061 [2024-07-26 14:07:59.411319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.061 [2024-07-26 14:07:59.411499] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.061 [2024-07-26 14:07:59.411509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.061 [2024-07-26 14:07:59.411516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.061 [2024-07-26 14:07:59.414346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.061 [2024-07-26 14:07:59.423541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.061 [2024-07-26 14:07:59.424278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.061 [2024-07-26 14:07:59.424307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.061 [2024-07-26 14:07:59.424316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.061 [2024-07-26 14:07:59.424496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.061 [2024-07-26 14:07:59.424675] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.061 [2024-07-26 14:07:59.424684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.061 [2024-07-26 14:07:59.424691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.061 [2024-07-26 14:07:59.427525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.061 [2024-07-26 14:07:59.436731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.061 [2024-07-26 14:07:59.437491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.061 [2024-07-26 14:07:59.437510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.061 [2024-07-26 14:07:59.437518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.061 [2024-07-26 14:07:59.437697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.061 [2024-07-26 14:07:59.437876] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.061 [2024-07-26 14:07:59.437886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.061 [2024-07-26 14:07:59.437892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.061 [2024-07-26 14:07:59.440723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.061 [2024-07-26 14:07:59.449910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.061 [2024-07-26 14:07:59.450492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.061 [2024-07-26 14:07:59.450510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.061 [2024-07-26 14:07:59.450518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.061 [2024-07-26 14:07:59.450696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.061 [2024-07-26 14:07:59.450876] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.061 [2024-07-26 14:07:59.450886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.061 [2024-07-26 14:07:59.450894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.061 [2024-07-26 14:07:59.453723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.061 [2024-07-26 14:07:59.463079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.061 [2024-07-26 14:07:59.463679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.062 [2024-07-26 14:07:59.463696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.062 [2024-07-26 14:07:59.463703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.062 [2024-07-26 14:07:59.463881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.062 [2024-07-26 14:07:59.464069] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.062 [2024-07-26 14:07:59.464079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.062 [2024-07-26 14:07:59.464086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.062 [2024-07-26 14:07:59.466909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.062 [2024-07-26 14:07:59.476264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.062 [2024-07-26 14:07:59.476855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.062 [2024-07-26 14:07:59.476872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.062 [2024-07-26 14:07:59.476880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.062 [2024-07-26 14:07:59.477062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.062 [2024-07-26 14:07:59.477240] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.062 [2024-07-26 14:07:59.477250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.062 [2024-07-26 14:07:59.477257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.062 [2024-07-26 14:07:59.480084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.062 [2024-07-26 14:07:59.489430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.062 [2024-07-26 14:07:59.490079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.062 [2024-07-26 14:07:59.490096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.062 [2024-07-26 14:07:59.490104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.062 [2024-07-26 14:07:59.490282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.062 [2024-07-26 14:07:59.490460] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.062 [2024-07-26 14:07:59.490470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.062 [2024-07-26 14:07:59.490476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.062 [2024-07-26 14:07:59.493302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.323 [2024-07-26 14:07:59.502491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.323 [2024-07-26 14:07:59.503096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-07-26 14:07:59.503113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.323 [2024-07-26 14:07:59.503121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.323 [2024-07-26 14:07:59.503297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.323 [2024-07-26 14:07:59.503475] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.323 [2024-07-26 14:07:59.503485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.323 [2024-07-26 14:07:59.503492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.323 [2024-07-26 14:07:59.506321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.323 [2024-07-26 14:07:59.515674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.323 [2024-07-26 14:07:59.516326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-07-26 14:07:59.516344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.323 [2024-07-26 14:07:59.516351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.323 [2024-07-26 14:07:59.516528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.323 [2024-07-26 14:07:59.516705] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.323 [2024-07-26 14:07:59.516715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.323 [2024-07-26 14:07:59.516722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.323 [2024-07-26 14:07:59.519560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.324 [2024-07-26 14:07:59.528748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.324 [2024-07-26 14:07:59.529693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-07-26 14:07:59.529712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.324 [2024-07-26 14:07:59.529720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.324 [2024-07-26 14:07:59.529892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.324 [2024-07-26 14:07:59.530093] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.324 [2024-07-26 14:07:59.530103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.324 [2024-07-26 14:07:59.530110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.324 [2024-07-26 14:07:59.532929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.324 [2024-07-26 14:07:59.541794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.324 [2024-07-26 14:07:59.542392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-07-26 14:07:59.542409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.324 [2024-07-26 14:07:59.542416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.324 [2024-07-26 14:07:59.542592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.324 [2024-07-26 14:07:59.542771] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.324 [2024-07-26 14:07:59.542781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.324 [2024-07-26 14:07:59.542787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.324 [2024-07-26 14:07:59.545614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.324 [2024-07-26 14:07:59.554968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.324 [2024-07-26 14:07:59.555693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-07-26 14:07:59.555711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.324 [2024-07-26 14:07:59.555722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.324 [2024-07-26 14:07:59.555901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.324 [2024-07-26 14:07:59.556084] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.324 [2024-07-26 14:07:59.556095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.324 [2024-07-26 14:07:59.556101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.324 [2024-07-26 14:07:59.558924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.324 [2024-07-26 14:07:59.568111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.324 [2024-07-26 14:07:59.568787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-07-26 14:07:59.568803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.324 [2024-07-26 14:07:59.568811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.324 [2024-07-26 14:07:59.568987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.324 [2024-07-26 14:07:59.569170] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.324 [2024-07-26 14:07:59.569180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.324 [2024-07-26 14:07:59.569186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.324 [2024-07-26 14:07:59.572008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.324 [2024-07-26 14:07:59.581197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.324 [2024-07-26 14:07:59.581855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-07-26 14:07:59.581872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.324 [2024-07-26 14:07:59.581879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.324 [2024-07-26 14:07:59.582062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.324 [2024-07-26 14:07:59.582241] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.324 [2024-07-26 14:07:59.582250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.324 [2024-07-26 14:07:59.582256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.324 [2024-07-26 14:07:59.585082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.324 [2024-07-26 14:07:59.594233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.324 [2024-07-26 14:07:59.594903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-07-26 14:07:59.594920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.324 [2024-07-26 14:07:59.594928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.324 [2024-07-26 14:07:59.595111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.324 [2024-07-26 14:07:59.595289] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.324 [2024-07-26 14:07:59.595298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.324 [2024-07-26 14:07:59.595308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.324 [2024-07-26 14:07:59.598135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.324 [2024-07-26 14:07:59.607312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.324 [2024-07-26 14:07:59.608034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-07-26 14:07:59.608055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.324 [2024-07-26 14:07:59.608088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.324 [2024-07-26 14:07:59.608265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.324 [2024-07-26 14:07:59.608443] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.324 [2024-07-26 14:07:59.608453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.324 [2024-07-26 14:07:59.608459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.324 [2024-07-26 14:07:59.611290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.324 [2024-07-26 14:07:59.620476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.324 [2024-07-26 14:07:59.621124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-07-26 14:07:59.621142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.324 [2024-07-26 14:07:59.621150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.324 [2024-07-26 14:07:59.621326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.324 [2024-07-26 14:07:59.621505] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.324 [2024-07-26 14:07:59.621515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.324 [2024-07-26 14:07:59.621522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.324 [2024-07-26 14:07:59.624348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.324 [2024-07-26 14:07:59.633541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.324 [2024-07-26 14:07:59.634265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-07-26 14:07:59.634282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.324 [2024-07-26 14:07:59.634291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.324 [2024-07-26 14:07:59.634468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.324 [2024-07-26 14:07:59.634645] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.324 [2024-07-26 14:07:59.634655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.324 [2024-07-26 14:07:59.634661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.324 [2024-07-26 14:07:59.637490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.324 [2024-07-26 14:07:59.646674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.324 [2024-07-26 14:07:59.647537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-07-26 14:07:59.647554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.324 [2024-07-26 14:07:59.647562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.324 [2024-07-26 14:07:59.647740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.324 [2024-07-26 14:07:59.647917] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.324 [2024-07-26 14:07:59.647926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.324 [2024-07-26 14:07:59.647932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.324 [2024-07-26 14:07:59.650766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.324 [2024-07-26 14:07:59.659783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.325 [2024-07-26 14:07:59.660429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-07-26 14:07:59.660446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.325 [2024-07-26 14:07:59.660454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.325 [2024-07-26 14:07:59.660630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.325 [2024-07-26 14:07:59.660808] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.325 [2024-07-26 14:07:59.660817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.325 [2024-07-26 14:07:59.660824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.325 [2024-07-26 14:07:59.663653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.325 [2024-07-26 14:07:59.672836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.325 [2024-07-26 14:07:59.673509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-07-26 14:07:59.673526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.325 [2024-07-26 14:07:59.673533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.325 [2024-07-26 14:07:59.673711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.325 [2024-07-26 14:07:59.673889] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.325 [2024-07-26 14:07:59.673899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.325 [2024-07-26 14:07:59.673905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.325 [2024-07-26 14:07:59.676731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.325 [2024-07-26 14:07:59.685915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.325 [2024-07-26 14:07:59.686510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-07-26 14:07:59.686527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.325 [2024-07-26 14:07:59.686535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.325 [2024-07-26 14:07:59.686716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.325 [2024-07-26 14:07:59.686895] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.325 [2024-07-26 14:07:59.686905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.325 [2024-07-26 14:07:59.686912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.325 [2024-07-26 14:07:59.689742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.325 [2024-07-26 14:07:59.699099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.325 [2024-07-26 14:07:59.699689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-07-26 14:07:59.699706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.325 [2024-07-26 14:07:59.699714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.325 [2024-07-26 14:07:59.699891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.325 [2024-07-26 14:07:59.700074] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.325 [2024-07-26 14:07:59.700084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.325 [2024-07-26 14:07:59.700090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.325 [2024-07-26 14:07:59.702914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.325 [2024-07-26 14:07:59.712268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.325 [2024-07-26 14:07:59.713149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-07-26 14:07:59.713167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.325 [2024-07-26 14:07:59.713175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.325 [2024-07-26 14:07:59.713352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.325 [2024-07-26 14:07:59.713529] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.325 [2024-07-26 14:07:59.713538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.325 [2024-07-26 14:07:59.713545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.325 [2024-07-26 14:07:59.716374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.325 [2024-07-26 14:07:59.725397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.325 [2024-07-26 14:07:59.726015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-07-26 14:07:59.726032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.325 [2024-07-26 14:07:59.726039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.325 [2024-07-26 14:07:59.726222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.325 [2024-07-26 14:07:59.726399] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.325 [2024-07-26 14:07:59.726409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.325 [2024-07-26 14:07:59.726420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.325 [2024-07-26 14:07:59.729249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.325 [2024-07-26 14:07:59.738439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.325 [2024-07-26 14:07:59.739032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-07-26 14:07:59.739054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.325 [2024-07-26 14:07:59.739062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.325 [2024-07-26 14:07:59.739240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.325 [2024-07-26 14:07:59.739418] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.325 [2024-07-26 14:07:59.739428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.325 [2024-07-26 14:07:59.739435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.325 [2024-07-26 14:07:59.742265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.325 [2024-07-26 14:07:59.751635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.325 [2024-07-26 14:07:59.752400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-07-26 14:07:59.752417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.325 [2024-07-26 14:07:59.752425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.325 [2024-07-26 14:07:59.752602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.325 [2024-07-26 14:07:59.752780] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.325 [2024-07-26 14:07:59.752790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.325 [2024-07-26 14:07:59.752796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.325 [2024-07-26 14:07:59.755624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.587 [2024-07-26 14:07:59.764801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.587 [2024-07-26 14:07:59.765401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.587 [2024-07-26 14:07:59.765418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.587 [2024-07-26 14:07:59.765427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.587 [2024-07-26 14:07:59.765607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.587 [2024-07-26 14:07:59.765785] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.587 [2024-07-26 14:07:59.765795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.587 [2024-07-26 14:07:59.765801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.587 [2024-07-26 14:07:59.768628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.587 [2024-07-26 14:07:59.777976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.587 [2024-07-26 14:07:59.778651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.587 [2024-07-26 14:07:59.778671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.587 [2024-07-26 14:07:59.778679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.587 [2024-07-26 14:07:59.778858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.587 [2024-07-26 14:07:59.779037] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.587 [2024-07-26 14:07:59.779052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.587 [2024-07-26 14:07:59.779060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.587 [2024-07-26 14:07:59.781886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.587 [2024-07-26 14:07:59.791066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.587 [2024-07-26 14:07:59.791522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.587 [2024-07-26 14:07:59.791539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.587 [2024-07-26 14:07:59.791547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.587 [2024-07-26 14:07:59.791725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.587 [2024-07-26 14:07:59.791904] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.587 [2024-07-26 14:07:59.791914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.587 [2024-07-26 14:07:59.791921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.587 [2024-07-26 14:07:59.794747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.587 [2024-07-26 14:07:59.804267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.587 [2024-07-26 14:07:59.804984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.587 [2024-07-26 14:07:59.805000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.587 [2024-07-26 14:07:59.805008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.587 [2024-07-26 14:07:59.805192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.587 [2024-07-26 14:07:59.805370] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.587 [2024-07-26 14:07:59.805380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.587 [2024-07-26 14:07:59.805387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.587 [2024-07-26 14:07:59.808213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.587 [2024-07-26 14:07:59.817424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.587 [2024-07-26 14:07:59.817866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.587 [2024-07-26 14:07:59.817883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.587 [2024-07-26 14:07:59.817891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.587 [2024-07-26 14:07:59.818072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.587 [2024-07-26 14:07:59.818254] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.587 [2024-07-26 14:07:59.818263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.587 [2024-07-26 14:07:59.818270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.587 [2024-07-26 14:07:59.821103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.587 [2024-07-26 14:07:59.830478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.587 [2024-07-26 14:07:59.831196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.587 [2024-07-26 14:07:59.831213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.587 [2024-07-26 14:07:59.831221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.587 [2024-07-26 14:07:59.831399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.587 [2024-07-26 14:07:59.831578] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.587 [2024-07-26 14:07:59.831588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.587 [2024-07-26 14:07:59.831595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.587 [2024-07-26 14:07:59.834426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.587 [2024-07-26 14:07:59.843605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.587 [2024-07-26 14:07:59.844252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.587 [2024-07-26 14:07:59.844269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.587 [2024-07-26 14:07:59.844277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.587 [2024-07-26 14:07:59.844454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.587 [2024-07-26 14:07:59.844631] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.587 [2024-07-26 14:07:59.844640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.587 [2024-07-26 14:07:59.844647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.587 [2024-07-26 14:07:59.847474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.587 [2024-07-26 14:07:59.856666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.587 [2024-07-26 14:07:59.857385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.587 [2024-07-26 14:07:59.857402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.587 [2024-07-26 14:07:59.857410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.587 [2024-07-26 14:07:59.857586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.587 [2024-07-26 14:07:59.857765] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.588 [2024-07-26 14:07:59.857775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.588 [2024-07-26 14:07:59.857781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.588 [2024-07-26 14:07:59.860609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.588 [2024-07-26 14:07:59.869789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.588 [2024-07-26 14:07:59.870500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.588 [2024-07-26 14:07:59.870517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.588 [2024-07-26 14:07:59.870525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.588 [2024-07-26 14:07:59.870703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.588 [2024-07-26 14:07:59.870881] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.588 [2024-07-26 14:07:59.870891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.588 [2024-07-26 14:07:59.870898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.588 [2024-07-26 14:07:59.873774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.588 [2024-07-26 14:07:59.882952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.588 [2024-07-26 14:07:59.883695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.588 [2024-07-26 14:07:59.883713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.588 [2024-07-26 14:07:59.883721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.588 [2024-07-26 14:07:59.883897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.588 [2024-07-26 14:07:59.884080] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.588 [2024-07-26 14:07:59.884090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.588 [2024-07-26 14:07:59.884096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.588 [2024-07-26 14:07:59.886920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.588 [2024-07-26 14:07:59.896100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.588 [2024-07-26 14:07:59.896787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.588 [2024-07-26 14:07:59.896804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.588 [2024-07-26 14:07:59.896811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.588 [2024-07-26 14:07:59.896989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.588 [2024-07-26 14:07:59.897173] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.588 [2024-07-26 14:07:59.897184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.588 [2024-07-26 14:07:59.897190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.588 [2024-07-26 14:07:59.900010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.588 [2024-07-26 14:07:59.909192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.588 [2024-07-26 14:07:59.909870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.588 [2024-07-26 14:07:59.909886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.588 [2024-07-26 14:07:59.909896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.588 [2024-07-26 14:07:59.910078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.588 [2024-07-26 14:07:59.910256] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.588 [2024-07-26 14:07:59.910266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.588 [2024-07-26 14:07:59.910273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.588 [2024-07-26 14:07:59.913095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.588 [2024-07-26 14:07:59.922279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.588 [2024-07-26 14:07:59.922999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.588 [2024-07-26 14:07:59.923016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.588 [2024-07-26 14:07:59.923024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.588 [2024-07-26 14:07:59.923205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.588 [2024-07-26 14:07:59.923384] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.588 [2024-07-26 14:07:59.923393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.588 [2024-07-26 14:07:59.923400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.588 [2024-07-26 14:07:59.926225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.588 [2024-07-26 14:07:59.935407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.588 [2024-07-26 14:07:59.936146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.588 [2024-07-26 14:07:59.936163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.588 [2024-07-26 14:07:59.936171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.588 [2024-07-26 14:07:59.936349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.588 [2024-07-26 14:07:59.936528] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.588 [2024-07-26 14:07:59.936537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.588 [2024-07-26 14:07:59.936544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.588 [2024-07-26 14:07:59.939372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.588 [2024-07-26 14:07:59.948544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.588 [2024-07-26 14:07:59.949216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.588 [2024-07-26 14:07:59.949234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.588 [2024-07-26 14:07:59.949242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.588 [2024-07-26 14:07:59.949419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.588 [2024-07-26 14:07:59.949598] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.588 [2024-07-26 14:07:59.949611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.588 [2024-07-26 14:07:59.949618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.588 [2024-07-26 14:07:59.952441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.588 [2024-07-26 14:07:59.961614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.588 [2024-07-26 14:07:59.962338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.588 [2024-07-26 14:07:59.962356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.588 [2024-07-26 14:07:59.962363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.588 [2024-07-26 14:07:59.962540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.588 [2024-07-26 14:07:59.962718] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.588 [2024-07-26 14:07:59.962728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.588 [2024-07-26 14:07:59.962736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.588 [2024-07-26 14:07:59.965562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.588 [2024-07-26 14:07:59.974735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.588 [2024-07-26 14:07:59.975411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.588 [2024-07-26 14:07:59.975428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.588 [2024-07-26 14:07:59.975436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.588 [2024-07-26 14:07:59.975613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.588 [2024-07-26 14:07:59.975792] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.588 [2024-07-26 14:07:59.975802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.588 [2024-07-26 14:07:59.975809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.588 [2024-07-26 14:07:59.978634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.588 [2024-07-26 14:07:59.987812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.588 [2024-07-26 14:07:59.988424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.588 [2024-07-26 14:07:59.988441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.588 [2024-07-26 14:07:59.988449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.588 [2024-07-26 14:07:59.988628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.588 [2024-07-26 14:07:59.988807] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.588 [2024-07-26 14:07:59.988817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.589 [2024-07-26 14:07:59.988825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.589 [2024-07-26 14:07:59.991651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.589 [2024-07-26 14:08:00.000989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.589 [2024-07-26 14:08:00.001732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.589 [2024-07-26 14:08:00.001748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.589 [2024-07-26 14:08:00.001757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.589 [2024-07-26 14:08:00.001934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.589 [2024-07-26 14:08:00.002119] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.589 [2024-07-26 14:08:00.002129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.589 [2024-07-26 14:08:00.002136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.589 [2024-07-26 14:08:00.004963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.589 [2024-07-26 14:08:00.014144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.589 [2024-07-26 14:08:00.014579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.589 [2024-07-26 14:08:00.014596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.589 [2024-07-26 14:08:00.014605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.589 [2024-07-26 14:08:00.014782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.589 [2024-07-26 14:08:00.014960] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.589 [2024-07-26 14:08:00.014970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.589 [2024-07-26 14:08:00.014976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.589 [2024-07-26 14:08:00.017803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.850 [2024-07-26 14:08:00.027732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.850 [2024-07-26 14:08:00.028419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.850 [2024-07-26 14:08:00.028438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.850 [2024-07-26 14:08:00.028446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.850 [2024-07-26 14:08:00.028626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.850 [2024-07-26 14:08:00.028804] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.850 [2024-07-26 14:08:00.028815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.850 [2024-07-26 14:08:00.028822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.850 [2024-07-26 14:08:00.031655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.850 [2024-07-26 14:08:00.040836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.850 [2024-07-26 14:08:00.041515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.850 [2024-07-26 14:08:00.041532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.850 [2024-07-26 14:08:00.041540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.850 [2024-07-26 14:08:00.041722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.850 [2024-07-26 14:08:00.041901] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.850 [2024-07-26 14:08:00.041910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.850 [2024-07-26 14:08:00.041918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.850 [2024-07-26 14:08:00.044746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.850 [2024-07-26 14:08:00.053921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.850 [2024-07-26 14:08:00.054675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.850 [2024-07-26 14:08:00.054693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.850 [2024-07-26 14:08:00.054700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.851 [2024-07-26 14:08:00.054877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.851 [2024-07-26 14:08:00.055062] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.851 [2024-07-26 14:08:00.055072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.851 [2024-07-26 14:08:00.055079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.851 [2024-07-26 14:08:00.057899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.851 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:32.851 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:26:32.851 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:32.851 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:32.851 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:32.851 [2024-07-26 14:08:00.067083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.851 [2024-07-26 14:08:00.067738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.851 [2024-07-26 14:08:00.067755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.851 [2024-07-26 14:08:00.067763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.851 [2024-07-26 14:08:00.067941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.851 [2024-07-26 14:08:00.068128] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.851 [2024-07-26 14:08:00.068138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.851 [2024-07-26 14:08:00.068144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.851 [2024-07-26 14:08:00.070974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.851 [2024-07-26 14:08:00.080162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.851 [2024-07-26 14:08:00.080801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.851 [2024-07-26 14:08:00.080818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.851 [2024-07-26 14:08:00.080826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.851 [2024-07-26 14:08:00.081009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.851 [2024-07-26 14:08:00.081194] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.851 [2024-07-26 14:08:00.081204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.851 [2024-07-26 14:08:00.081211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.851 [2024-07-26 14:08:00.084035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.851 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.851 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:32.851 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.851 [2024-07-26 14:08:00.093225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.851 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:32.851 [2024-07-26 14:08:00.093911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.851 [2024-07-26 14:08:00.093929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.851 [2024-07-26 14:08:00.093937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.851 [2024-07-26 14:08:00.094119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.851 [2024-07-26 14:08:00.094298] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.851 [2024-07-26 14:08:00.094308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.851 [2024-07-26 14:08:00.094315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.851 [2024-07-26 14:08:00.097145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.851 [2024-07-26 14:08:00.099352] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.851 [2024-07-26 14:08:00.106428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.851 [2024-07-26 14:08:00.107191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.851 [2024-07-26 14:08:00.107209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.851 [2024-07-26 14:08:00.107217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.851 [2024-07-26 14:08:00.107395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.851 [2024-07-26 14:08:00.107573] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.851 [2024-07-26 14:08:00.107583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.851 [2024-07-26 14:08:00.107590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.851 [2024-07-26 14:08:00.110419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.851 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.851 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:32.851 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.851 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:32.851 [2024-07-26 14:08:00.119604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.851 [2024-07-26 14:08:00.120351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.851 [2024-07-26 14:08:00.120368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.851 [2024-07-26 14:08:00.120375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.851 [2024-07-26 14:08:00.120552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.851 [2024-07-26 14:08:00.120730] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.851 [2024-07-26 14:08:00.120739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.851 [2024-07-26 14:08:00.120746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.851 [2024-07-26 14:08:00.123574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.851 [2024-07-26 14:08:00.132763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.851 [2024-07-26 14:08:00.133454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.851 [2024-07-26 14:08:00.133473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.851 [2024-07-26 14:08:00.133481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.851 [2024-07-26 14:08:00.133660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.851 [2024-07-26 14:08:00.133840] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.851 [2024-07-26 14:08:00.133850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.851 [2024-07-26 14:08:00.133857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.851 [2024-07-26 14:08:00.136686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.851 Malloc0 00:26:32.851 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.851 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:32.851 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.851 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:32.851 [2024-07-26 14:08:00.145861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.851 [2024-07-26 14:08:00.146515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.851 [2024-07-26 14:08:00.146531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.851 [2024-07-26 14:08:00.146540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.851 [2024-07-26 14:08:00.146717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.851 [2024-07-26 14:08:00.146894] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.851 [2024-07-26 14:08:00.146904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.851 [2024-07-26 14:08:00.146911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.851 [2024-07-26 14:08:00.149736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.851 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.851 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:32.851 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.851 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:32.851 [2024-07-26 14:08:00.158903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.851 [2024-07-26 14:08:00.159648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.851 [2024-07-26 14:08:00.159664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2980 with addr=10.0.0.2, port=4420 00:26:32.851 [2024-07-26 14:08:00.159671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2980 is same with the state(5) to be set 00:26:32.851 [2024-07-26 14:08:00.159848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2980 (9): Bad file descriptor 00:26:32.852 [2024-07-26 14:08:00.160025] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.852 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.852 [2024-07-26 14:08:00.160035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.852 [2024-07-26 14:08:00.160049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.852 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:32.852 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.852 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:32.852 [2024-07-26 14:08:00.162870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.852 [2024-07-26 14:08:00.163513] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.852 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.852 14:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3111013 00:26:32.852 [2024-07-26 14:08:00.172036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.852 [2024-07-26 14:08:00.214698] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
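Editor note: with the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice and the "Resetting controller successful" message above, the newly configured target is reachable again and the stalled bdevperf job (pid 3111013) can proceed. The rpc_cmd calls traced in this stretch map one-to-one onto plain rpc.py invocations; a by-hand equivalent, assuming scripts/rpc.py from the SPDK tree and the default RPC socket, would be:

    rpc.py nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, same flags as the trace
    rpc.py bdev_malloc_create 64 512 -b Malloc0                         # 64 MB RAM-backed bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # attach Malloc0 as a namespace
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420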
00:26:42.843 00:26:42.843 Latency(us) 00:26:42.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.843 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:42.843 Verification LBA range: start 0x0 length 0x4000 00:26:42.843 Nvme1n1 : 15.01 8032.60 31.38 12237.98 0.00 6294.50 1531.55 28038.01 00:26:42.843 =================================================================================================================== 00:26:42.843 Total : 8032.60 31.38 12237.98 0.00 6294.50 1531.55 28038.01 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:42.843 rmmod nvme_tcp 00:26:42.843 rmmod nvme_fabrics 00:26:42.843 rmmod nvme_keyring 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3111944 ']' 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3111944 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 3111944 ']' 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 3111944 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3111944 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3111944' 00:26:42.843 killing process with pid 3111944 00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 3111944 
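Editor's note: a quick sanity check of the latency table above: with the 4096-byte I/O size from the job line, 8032.60 IOPS works out to 8032.60 x 4096 B ≈ 32.9 MB/s ≈ 31.38 MiB/s, which matches the MiB/s column, so the IOPS and throughput figures are self-consistent. The separate Fail/s column counts I/Os that did not complete successfully over the 15.01 s run, presumably reflecting the repeated controller resets traced above.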
00:26:42.843 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 3111944 00:26:42.843 14:08:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:42.843 14:08:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:42.844 14:08:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:42.844 14:08:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:42.844 14:08:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:42.844 14:08:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.844 14:08:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.844 14:08:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.785 14:08:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:43.785 00:26:43.785 real 0m26.031s 00:26:43.785 user 1m2.585s 00:26:43.785 sys 0m6.236s 00:26:43.785 14:08:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:43.785 14:08:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.785 ************************************ 00:26:43.785 END TEST nvmf_bdevperf 00:26:43.785 ************************************ 00:26:43.785 14:08:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:43.785 14:08:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:43.785 14:08:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:43.785 14:08:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.785 ************************************ 00:26:43.785 START TEST nvmf_target_disconnect 00:26:43.785 ************************************ 00:26:43.785 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:44.045 * Looking for test storage... 
00:26:44.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:44.045 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:44.045 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:44.045 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.045 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.045 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.045 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.045 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.045 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.045 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.045 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.045 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.045 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.045 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:44.045 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:44.045 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.045 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.045 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:44.045 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:44.045 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:44.045 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.045 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.046 
14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:26:44.046 14:08:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.336 
14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:49.336 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:49.336 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.336 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:49.337 Found net devices under 0000:86:00.0: cvl_0_0 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:49.337 Found net devices under 0000:86:00.1: cvl_0_1 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:49.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:49.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:26:49.337 00:26:49.337 --- 10.0.0.2 ping statistics --- 00:26:49.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.337 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:49.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:49.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:26:49.337 00:26:49.337 --- 10.0.0.1 ping statistics --- 00:26:49.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.337 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:49.337 ************************************ 00:26:49.337 START TEST nvmf_target_disconnect_tc1 00:26:49.337 ************************************ 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:49.337 14:08:16 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:49.337 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.337 [2024-07-26 14:08:16.679987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.337 [2024-07-26 14:08:16.680034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x500e60 with addr=10.0.0.2, port=4420 00:26:49.337 [2024-07-26 14:08:16.680060] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:49.337 [2024-07-26 14:08:16.680071] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:49.337 [2024-07-26 14:08:16.680078] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:49.337 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:49.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:49.337 Initializing NVMe Controllers 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:49.337 00:26:49.337 real 0m0.101s 00:26:49.337 user 0m0.045s 00:26:49.337 sys 0m0.056s 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:49.337 ************************************ 00:26:49.337 END TEST nvmf_target_disconnect_tc1 00:26:49.337 ************************************ 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:49.337 14:08:16 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:49.337 ************************************ 00:26:49.337 START TEST nvmf_target_disconnect_tc2 00:26:49.337 ************************************ 00:26:49.337 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:26:49.338 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:49.338 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:49.338 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:49.338 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:49.338 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:49.338 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3117089 00:26:49.338 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3117089 00:26:49.338 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3117089 ']' 00:26:49.338 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.338 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:49.338 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:49.338 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:49.338 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:49.338 14:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:49.598 [2024-07-26 14:08:16.808552] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:26:49.599 [2024-07-26 14:08:16.808596] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:49.599 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.599 [2024-07-26 14:08:16.870978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:49.599 [2024-07-26 14:08:16.972029] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:49.599 [2024-07-26 14:08:16.972093] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:49.599 [2024-07-26 14:08:16.972102] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:49.599 [2024-07-26 14:08:16.972110] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:49.599 [2024-07-26 14:08:16.972116] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:49.599 [2024-07-26 14:08:16.972229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:26:49.599 [2024-07-26 14:08:16.972340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:26:49.599 [2024-07-26 14:08:16.972447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:26:49.599 [2024-07-26 14:08:16.972448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:50.540 Malloc0 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:50.540 [2024-07-26 14:08:17.668397] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:50.540 [2024-07-26 14:08:17.697350] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3117129 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:50.540 14:08:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:50.540 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.454 14:08:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3117089 00:26:52.454 14:08:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:52.454 Read completed with error (sct=0, sc=8) 00:26:52.454 starting I/O failed 00:26:52.454 Read completed with error (sct=0, sc=8) 00:26:52.454 starting I/O failed 00:26:52.454 Read completed with error (sct=0, sc=8) 00:26:52.454 starting I/O failed 00:26:52.454 Read completed with error (sct=0, sc=8) 00:26:52.454 starting 
I/O failed 00:26:52.454 Read completed with error (sct=0, sc=8) 00:26:52.454 starting I/O failed 00:26:52.454 Read completed with error (sct=0, sc=8) 00:26:52.454 starting I/O failed 00:26:52.454 Read completed with error (sct=0, sc=8) 00:26:52.454 starting I/O failed 00:26:52.454 Read completed with error (sct=0, sc=8) 00:26:52.454 starting I/O failed 00:26:52.454 Read completed with error (sct=0, sc=8) 00:26:52.454 starting I/O failed 00:26:52.454 Read completed with error (sct=0, sc=8) 00:26:52.454 starting I/O failed 00:26:52.454 Read completed with error (sct=0, sc=8) 00:26:52.454 starting I/O failed 00:26:52.454 Write completed with error (sct=0, sc=8) 00:26:52.454 starting I/O failed 00:26:52.454 Write completed with error (sct=0, sc=8) 00:26:52.454 starting I/O failed 00:26:52.454 Read completed with error (sct=0, sc=8) 00:26:52.454 starting I/O failed 00:26:52.454 Write completed with error (sct=0, sc=8) 00:26:52.454 starting I/O failed 00:26:52.454 Read completed with error (sct=0, sc=8) 00:26:52.454 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 [2024-07-26 14:08:19.724815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 
Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 [2024-07-26 14:08:19.725025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed 
with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Read completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 Write completed with error (sct=0, sc=8) 00:26:52.455 starting I/O failed 00:26:52.455 [2024-07-26 14:08:19.725222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.455 [2024-07-26 14:08:19.725725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.455 [2024-07-26 14:08:19.725744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.455 qpair failed and we were unable to recover it. 00:26:52.455 [2024-07-26 14:08:19.726216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.455 [2024-07-26 14:08:19.726251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.455 qpair failed and we were unable to recover it. 00:26:52.455 [2024-07-26 14:08:19.726803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.455 [2024-07-26 14:08:19.726834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.455 qpair failed and we were unable to recover it. 00:26:52.455 [2024-07-26 14:08:19.727313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.455 [2024-07-26 14:08:19.727346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.455 qpair failed and we were unable to recover it. 00:26:52.455 [2024-07-26 14:08:19.727698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.455 [2024-07-26 14:08:19.727729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.455 qpair failed and we were unable to recover it. 00:26:52.455 [2024-07-26 14:08:19.728082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.455 [2024-07-26 14:08:19.728114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.455 qpair failed and we were unable to recover it. 
00:26:52.455 [2024-07-26 14:08:19.728638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.728669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.729140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.729173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.729608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.729638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.730072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.730104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.730614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.730645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.731219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.731251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.731709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.731740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.732214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.732225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.732685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.732716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.733217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.733249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 
00:26:52.456 [2024-07-26 14:08:19.733843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.733874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.734430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.734462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.734879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.734910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.735456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.735488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.736054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.736087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.736581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.736612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.737207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.737240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.737680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.737712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.738120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.738151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.738418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.738449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 
00:26:52.456 [2024-07-26 14:08:19.739023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.739062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.739605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.739637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.740183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.740214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.740773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.740810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.741352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.741368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.741821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.741836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.742310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.456 [2024-07-26 14:08:19.742325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.456 qpair failed and we were unable to recover it. 00:26:52.456 [2024-07-26 14:08:19.742791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.742806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.743354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.743369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.743901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.743916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 
00:26:52.457 [2024-07-26 14:08:19.744316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.744331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.744888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.744903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.745315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.745348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.745830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.745861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.746358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.746389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.746877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.746908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.747398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.747429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.747890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.747921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.748431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.748462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.748942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.748973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 
00:26:52.457 [2024-07-26 14:08:19.749460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.749493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.749987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.750018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.750610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.750642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.751188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.751220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.751717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.751748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.752238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.752270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.752699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.752731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.753217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.753256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.753766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.753797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.754290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.754322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 
00:26:52.457 [2024-07-26 14:08:19.754936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.754967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.755451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.755483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.756062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.756094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.756598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.756629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.757183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.757216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.757762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.757793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.758373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.758406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.758900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.758932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.759417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.759448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.760012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.760052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 
00:26:52.457 [2024-07-26 14:08:19.760546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.760577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.761007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.457 [2024-07-26 14:08:19.761037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.457 qpair failed and we were unable to recover it. 00:26:52.457 [2024-07-26 14:08:19.761603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.761635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.762140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.762178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.762686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.762718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.763211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.763244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.763788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.763819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.764360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.764392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.764946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.764977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.765526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.765558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 
00:26:52.458 [2024-07-26 14:08:19.766064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.766097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.766590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.766622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.767141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.767173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.767721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.767752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.768308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.768339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.768902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.768932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.769474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.769506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.770025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.770069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.770612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.770643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.771207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.771240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 
00:26:52.458 [2024-07-26 14:08:19.771822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.771853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.772341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.772373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.772888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.772919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.773405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.773436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.773978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.774010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.774610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.774642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.775183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.775216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.775712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.775743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.458 qpair failed and we were unable to recover it. 00:26:52.458 [2024-07-26 14:08:19.776230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.458 [2024-07-26 14:08:19.776262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.776691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.776721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 
00:26:52.459 [2024-07-26 14:08:19.777219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.777251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.777744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.777775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.778336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.778368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.778854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.778885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.779399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.779431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.779925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.779957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.780525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.780557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.781102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.781134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.781725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.781756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.782236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.782267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 
00:26:52.459 [2024-07-26 14:08:19.782744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.782775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.783280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.783312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.783856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.783888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.784381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.784439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.785026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.785075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.785564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.785594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.786085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.786117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.786606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.786637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.787145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.787178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.787696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.787727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 
00:26:52.459 [2024-07-26 14:08:19.788266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.788298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.788856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.788888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.789102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.789143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.789679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.789711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.790232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.790271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.790810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.790841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.791408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.791440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.791992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.792023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.792542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.792574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.793138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.793153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 
00:26:52.459 [2024-07-26 14:08:19.793633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.793664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.459 [2024-07-26 14:08:19.794148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.459 [2024-07-26 14:08:19.794180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.459 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.794693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.794724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.795265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.795296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.795786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.795817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.796357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.796389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.796939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.796970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.797446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.797478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.797996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.798026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.798296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.798328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 
00:26:52.460 [2024-07-26 14:08:19.798645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.798676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.799241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.799273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.799772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.799803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.800373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.800406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.800957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.800987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.801450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.801482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.802025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.802065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.802328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.802359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.802924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.802955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.803466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.803498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 
00:26:52.460 [2024-07-26 14:08:19.803979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.804009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.804503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.804534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.805102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.805135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.805679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.805715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.806254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.806270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.806716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.806747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.807186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.807218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.807808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.807839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.808396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.808427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.808990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.809021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 
00:26:52.460 [2024-07-26 14:08:19.809603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.460 [2024-07-26 14:08:19.809635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.460 qpair failed and we were unable to recover it. 00:26:52.460 [2024-07-26 14:08:19.810179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.461 [2024-07-26 14:08:19.810210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.461 qpair failed and we were unable to recover it. 00:26:52.461 [2024-07-26 14:08:19.810790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.461 [2024-07-26 14:08:19.810822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.461 qpair failed and we were unable to recover it. 00:26:52.461 [2024-07-26 14:08:19.811089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.461 [2024-07-26 14:08:19.811121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.461 qpair failed and we were unable to recover it. 00:26:52.461 [2024-07-26 14:08:19.811683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.461 [2024-07-26 14:08:19.811714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.461 qpair failed and we were unable to recover it. 00:26:52.461 [2024-07-26 14:08:19.812208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.461 [2024-07-26 14:08:19.812240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.461 qpair failed and we were unable to recover it. 00:26:52.461 [2024-07-26 14:08:19.812718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.461 [2024-07-26 14:08:19.812749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.461 qpair failed and we were unable to recover it. 00:26:52.461 [2024-07-26 14:08:19.813316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.461 [2024-07-26 14:08:19.813332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.461 qpair failed and we were unable to recover it. 00:26:52.461 [2024-07-26 14:08:19.813796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.461 [2024-07-26 14:08:19.813828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.461 qpair failed and we were unable to recover it. 00:26:52.461 [2024-07-26 14:08:19.814280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.461 [2024-07-26 14:08:19.814312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.461 qpair failed and we were unable to recover it. 
00:26:52.461 [2024-07-26 14:08:19.814873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.461 [2024-07-26 14:08:19.814904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420
00:26:52.461 qpair failed and we were unable to recover it.
00:26:52.737 [... the same three-record sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously through [2024-07-26 14:08:19.927937] ...]
00:26:52.737 [2024-07-26 14:08:19.928485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.928517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.929006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.929036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.929595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.929626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.930117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.930149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.930590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.930621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.931187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.931220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.931649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.931679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.932175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.932207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.932694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.932725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.933244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.933277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 
00:26:52.737 [2024-07-26 14:08:19.933831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.933861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.934405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.934437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.935008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.935038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.935533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.935570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.936134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.936167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.936758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.936788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.937273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.937312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.937820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.937850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.938413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.938445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.938935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.938966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 
00:26:52.737 [2024-07-26 14:08:19.939475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.939507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.940075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.940107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.940642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.940673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.941164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.941196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.941781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.941811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.942373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.942405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.942972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.943002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.943511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.943543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.944108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.944140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 00:26:52.737 [2024-07-26 14:08:19.944707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.737 [2024-07-26 14:08:19.944739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.737 qpair failed and we were unable to recover it. 
00:26:52.737 [2024-07-26 14:08:19.945211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.945243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.945834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.945864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.946360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.946392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.946958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.946990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.947564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.947596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.948088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.948120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.948601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.948632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.949147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.949178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.949686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.949718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.950282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.950315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 
00:26:52.738 [2024-07-26 14:08:19.950800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.950832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.951367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.951400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.951963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.951993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.952492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.952524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.953095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.953128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.953613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.953644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.954126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.954158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.954641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.954671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.955167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.955199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.955786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.955816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 
00:26:52.738 [2024-07-26 14:08:19.956384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.956415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.956910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.956940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.957483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.957516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.958086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.958117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.958636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.958667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.959169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.959201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.959761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.959792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.960333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.960364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.960846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.960878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 00:26:52.738 [2024-07-26 14:08:19.961371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.738 [2024-07-26 14:08:19.961403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.738 qpair failed and we were unable to recover it. 
00:26:52.739 [2024-07-26 14:08:19.961892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.961922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.962452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.962485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.963056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.963088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.963651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.963682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.964186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.964218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.964702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.964733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.965312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.965344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.965855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.965886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.966346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.966379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.966881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.966912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 
00:26:52.739 [2024-07-26 14:08:19.967224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.967256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.967707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.967737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.968227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.968259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.968742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.968772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.969337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.969352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.969884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.969914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.970540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.970572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.971173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.971205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.971721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.971751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.972309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.972340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 
00:26:52.739 [2024-07-26 14:08:19.972933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.972972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.973542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.973575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.974088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.974121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.974684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.974715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.975250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.975282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.975842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.975857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.976328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.976343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.976856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.976871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.977422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.977438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.739 [2024-07-26 14:08:19.977761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.977776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 
00:26:52.739 [2024-07-26 14:08:19.978242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.739 [2024-07-26 14:08:19.978273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.739 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.978835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.978850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.979322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.979338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.979842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.979857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.980316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.980331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.980829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.980843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.981389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.981404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.981983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.981998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.982537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.982552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.983106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.983122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 
00:26:52.740 [2024-07-26 14:08:19.983699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.983714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.984274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.984290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.984771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.984787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.985308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.985324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.985896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.985911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.986442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.986458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.986948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.986962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.987496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.987512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.988012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.988027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.988571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.988587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 
00:26:52.740 [2024-07-26 14:08:19.989141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.989156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.989621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.989636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.990168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.990184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.990747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.990761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.991312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.991327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.991777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.991792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.992259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.992274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.992822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.992837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.993367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.993383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.993904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.993919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 
00:26:52.740 [2024-07-26 14:08:19.994470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.994489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.994945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.994960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.995485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.995500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.996029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.996048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.996570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.740 [2024-07-26 14:08:19.996585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.740 qpair failed and we were unable to recover it. 00:26:52.740 [2024-07-26 14:08:19.997039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:19.997060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:19.997577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:19.997593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:19.998060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:19.998076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:19.998646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:19.998661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:19.999221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:19.999253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 
00:26:52.741 [2024-07-26 14:08:19.999752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:19.999767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.000284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.000300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.000705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.000721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.001216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.001231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.001786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.001801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.002278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.002294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.002837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.002852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.003406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.003422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.003941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.003956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.004468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.004484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 
00:26:52.741 [2024-07-26 14:08:20.005020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.005034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.005500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.005516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.006085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.006101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.006562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.006577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.007041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.007061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.007528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.007543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.008081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.008097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.008608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.008624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.009095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.009110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.009589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.009604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 
00:26:52.741 [2024-07-26 14:08:20.010160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.010176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.010684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.010699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.011265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.011281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.011739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.011754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.012309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.012325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.012811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.012826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.013310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.013326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.013812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.013828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.014334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.014350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.014869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.014884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 
00:26:52.741 [2024-07-26 14:08:20.015345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.015363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.015847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.015862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.741 qpair failed and we were unable to recover it. 00:26:52.741 [2024-07-26 14:08:20.016414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.741 [2024-07-26 14:08:20.016430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.016986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.017001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.017529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.017545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.018097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.018113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.018649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.018665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.019154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.019170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.019809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.019824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.023545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.023583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 
00:26:52.742 [2024-07-26 14:08:20.024039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.024068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.024525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.024541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.025073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.025089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.025640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.025655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.026143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.026159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.026714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.026730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.027304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.027320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.027806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.027821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.028377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.028393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.028963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.028978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 
00:26:52.742 [2024-07-26 14:08:20.029558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.029574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.030103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.030119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.030675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.030690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.031228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.031244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.031782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.031797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.032278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.032294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.032850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.032865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.033347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.033363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.033834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.033849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.034392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.034408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 
00:26:52.742 [2024-07-26 14:08:20.034867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.034883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.035434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.035450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.036008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.036023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.036576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.036592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.037124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.037140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.037694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.037709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.038253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.038269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.038727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.038743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.039275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.039291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 00:26:52.742 [2024-07-26 14:08:20.039753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.742 [2024-07-26 14:08:20.039768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.742 qpair failed and we were unable to recover it. 
00:26:52.742 [2024-07-26 14:08:20.040234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.040253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.040817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.040832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.041415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.041431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.041987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.042001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.042524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.042540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.043059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.043074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.043654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.043669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.044185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.044201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.044716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.044731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.045190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.045206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 
00:26:52.743 [2024-07-26 14:08:20.045680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.045695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.046223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.046239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.046746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.046762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.047288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.047303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.047843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.047859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.048398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.048414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.048950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.048965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.049517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.049532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.050039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.050061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.050540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.050555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 
00:26:52.743 [2024-07-26 14:08:20.051063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.051078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.051623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.051638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.052115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.052131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.052662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.052677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.053230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.053245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.053697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.053713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.054107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.054122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.054653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.054668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.055214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.055230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.055678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.055693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 
00:26:52.743 [2024-07-26 14:08:20.056242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.056257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.056720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.056735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.057261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.057276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.057732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.057746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.058278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.058294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.058871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.058885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.059467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.059482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.743 qpair failed and we were unable to recover it. 00:26:52.743 [2024-07-26 14:08:20.060001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.743 [2024-07-26 14:08:20.060016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.060531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.060546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.061012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.061027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 
00:26:52.744 [2024-07-26 14:08:20.061544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.061562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.062027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.062047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.062557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.062572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.063141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.063157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.063664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.063679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.064178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.064194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.064749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.064764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.065347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.065362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.065938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.065953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.066344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.066359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 
00:26:52.744 [2024-07-26 14:08:20.066831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.066846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.067357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.067373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.067909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.067924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.068452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.068469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.069002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.069017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.069545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.069561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.070061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.070077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.070537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.070552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.071121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.071137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.071691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.071706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 
00:26:52.744 [2024-07-26 14:08:20.072260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.072275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.072730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.072745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.073272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.073288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.073816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.073830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.074368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.074384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.074850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.074864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.075393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.075409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.075900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.075916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.744 [2024-07-26 14:08:20.076370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.744 [2024-07-26 14:08:20.076386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.744 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.076916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.076931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 
00:26:52.745 [2024-07-26 14:08:20.077492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.077508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.078065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.078081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.078638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.078653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.079224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.079240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.079799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.079814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.080321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.080337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.080799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.080814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.081347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.081363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.081773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.081804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.082314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.082346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 
00:26:52.745 [2024-07-26 14:08:20.082906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.082948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.083460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.083492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.083990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.084022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.084630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.084662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.085249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.085281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.085873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.085904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.086464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.086496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.087065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.087097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.087608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.087639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.088127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.088159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 
00:26:52.745 [2024-07-26 14:08:20.088730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.088761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.089344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.089376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.089979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.090010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.090577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.090610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.091114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.091148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.091727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.091758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.092370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.092402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.092983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.093014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.093550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.093582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 00:26:52.745 [2024-07-26 14:08:20.094071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.745 [2024-07-26 14:08:20.094104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:52.745 qpair failed and we were unable to recover it. 
00:26:52.745 [2024-07-26 14:08:20.094647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.745 [2024-07-26 14:08:20.094678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420
00:26:52.745 qpair failed and we were unable to recover it.
[The same three-entry error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times with only the timestamps advancing, from app timestamp 14:08:20.094 to 14:08:20.215 and console timestamp 00:26:52.745 to 00:26:53.061; the repetitions are elided here.]
00:26:53.061 [2024-07-26 14:08:20.215798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.061 [2024-07-26 14:08:20.215830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.061 qpair failed and we were unable to recover it. 00:26:53.061 [2024-07-26 14:08:20.216424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.061 [2024-07-26 14:08:20.216457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.061 qpair failed and we were unable to recover it. 00:26:53.061 [2024-07-26 14:08:20.216984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.061 [2024-07-26 14:08:20.217020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.061 qpair failed and we were unable to recover it. 00:26:53.061 [2024-07-26 14:08:20.217722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.061 [2024-07-26 14:08:20.217801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.061 qpair failed and we were unable to recover it. 00:26:53.061 [2024-07-26 14:08:20.218414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.061 [2024-07-26 14:08:20.218455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.061 qpair failed and we were unable to recover it. 00:26:53.061 [2024-07-26 14:08:20.219004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.061 [2024-07-26 14:08:20.219037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.061 qpair failed and we were unable to recover it. 00:26:53.061 [2024-07-26 14:08:20.219567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.219601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.220174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.220207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.220716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.220748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.221340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.221373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 
00:26:53.062 [2024-07-26 14:08:20.221903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.221945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.222512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.222545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.223141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.223175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.223747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.223779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.224363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.224380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.224911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.224942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.225457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.225490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.226071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.226111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.226619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.226655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.227214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.227232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 
00:26:53.062 [2024-07-26 14:08:20.227756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.227772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.228292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.228314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.228842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.228878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.229407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.229425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.229977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.230013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.230581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.230617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.231231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.231266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.231848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.231867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.232325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.232344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.232899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.232917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 
00:26:53.062 [2024-07-26 14:08:20.233510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.233546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.234148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.234184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.234776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.234795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.235339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.235357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.235948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.235983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.236503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.236539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.237119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.237138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.237658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.237676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.238213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.238231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.238844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.238866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 
00:26:53.062 [2024-07-26 14:08:20.239462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.239480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.240066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.240085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.240691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.062 [2024-07-26 14:08:20.240709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.062 qpair failed and we were unable to recover it. 00:26:53.062 [2024-07-26 14:08:20.241229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.241248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.241837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.241855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.242419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.242437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.242902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.242920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.243505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.243523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.244016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.244034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.244564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.244582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 
00:26:53.063 [2024-07-26 14:08:20.245137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.245158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.245736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.245754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.246286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.246321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.246878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.246912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.247522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.247540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.248072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.248089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.248664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.248703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.249305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.249349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.249939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.249957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.250439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.250479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 
00:26:53.063 [2024-07-26 14:08:20.251079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.251115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.251700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.251718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.252298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.252316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.252891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.252929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.253580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.253618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.254204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.254241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.254788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.254823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.255399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.255416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.255907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.255925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.256429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.256447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 
00:26:53.063 [2024-07-26 14:08:20.257034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.257061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.257565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.257582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.258132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.258151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.258669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.063 [2024-07-26 14:08:20.258687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.063 qpair failed and we were unable to recover it. 00:26:53.063 [2024-07-26 14:08:20.259287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.259305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.259860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.259877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.260443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.260461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.261058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.261085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.261560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.261578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.262118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.262137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 
00:26:53.064 [2024-07-26 14:08:20.262556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.262574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.263106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.263142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.263746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.263766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.264348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.264366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.264913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.264931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.265497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.265515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.266061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.266079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.266629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.266647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.267263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.267298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.267862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.267893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 
00:26:53.064 [2024-07-26 14:08:20.268493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.268526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.269035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.269082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.269677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.269708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.270218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.270252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.270827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.270859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.271465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.271499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.272107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.272140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.272734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.272765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.273391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.273424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.274007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.274039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 
00:26:53.064 [2024-07-26 14:08:20.274562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.274595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.275068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.275102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.275682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.275714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.276230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.276263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.276827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.276860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.277437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.277471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.278070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.278104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.278629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.278661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.279256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.279290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.279886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.279916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 
00:26:53.064 [2024-07-26 14:08:20.280505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.280539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.064 qpair failed and we were unable to recover it. 00:26:53.064 [2024-07-26 14:08:20.281140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.064 [2024-07-26 14:08:20.281172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.281770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.281803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.282305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.282338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.282912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.282944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.283537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.283571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.284164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.284199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.284796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.284834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.285388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.285421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.285996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.286027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 
00:26:53.065 [2024-07-26 14:08:20.286654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.286686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.287286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.287320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.287908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.287940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.288516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.288550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.289201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.289234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.289840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.289872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.290382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.290416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.290984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.291015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.291603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.291637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.292215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.292249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 
00:26:53.065 [2024-07-26 14:08:20.292757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.292789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.293402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.293434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.294027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.294067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.294652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.294684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.295288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.295321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.295913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.295945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.296559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.296592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.297171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.297204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.297796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.297830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.298387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.298421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 
00:26:53.065 [2024-07-26 14:08:20.299010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.299059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.299638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.299670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.300274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.300308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.300855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.300885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.301502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.301535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.302072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.302105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.302681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.302713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.303299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.303332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.303830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.065 [2024-07-26 14:08:20.303861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.065 qpair failed and we were unable to recover it. 00:26:53.065 [2024-07-26 14:08:20.304395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.304444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 
00:26:53.066 [2024-07-26 14:08:20.305026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.305081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.305674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.305707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.306296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.306329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.306948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.306980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.307578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.307616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.308119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.308153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.308735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.308766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.309290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.309329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.309937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.309969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.310524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.310557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 
00:26:53.066 [2024-07-26 14:08:20.311188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.311222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.311824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.311855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.312446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.312480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.313073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.313106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.313699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.313730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.314252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.314285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.314898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.314931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.315512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.315545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.316117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.316151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.316781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.316812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 
00:26:53.066 [2024-07-26 14:08:20.317418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.317451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.318020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.318036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.318594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.318627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.319227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.319260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.319856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.319888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.320487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.320520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.321125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.321159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.321684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.321716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.322312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.322345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.322944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.322976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 
00:26:53.066 [2024-07-26 14:08:20.323493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.323526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.324105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.324139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.324732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.324764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.325294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.325326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.325909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.325942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.326546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-07-26 14:08:20.326579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.066 qpair failed and we were unable to recover it. 00:26:53.066 [2024-07-26 14:08:20.327182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.327216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.327812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.327843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.328440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.328474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.328998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.329028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 
00:26:53.067 [2024-07-26 14:08:20.329619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.329654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.330175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.330208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.330797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.330829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.331429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.331462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.332031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.332075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.332600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.332632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.333230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.333264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.333859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.333897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.334430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.334462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.335060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.335094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 
00:26:53.067 [2024-07-26 14:08:20.335716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.335748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.336319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.336353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.336920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.336951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.337720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.337804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.338465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.338506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.339064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.339099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.339740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.339772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.340370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.340404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.340991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.341023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.341629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.341665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 
00:26:53.067 [2024-07-26 14:08:20.342184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.342218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.342831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.342862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.343499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.343533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.344109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.344142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.344656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.344688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.345254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.345271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.345850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.345883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.346457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.346491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.347002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.347033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.347654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.347687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 
00:26:53.067 [2024-07-26 14:08:20.348287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.348320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.348915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.348947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.349546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.349579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.067 [2024-07-26 14:08:20.350154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-07-26 14:08:20.350187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.067 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.350779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.350811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.351327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.351360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.351926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.351957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.352504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.352536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.353070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.353104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.353659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.353692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 
00:26:53.068 [2024-07-26 14:08:20.354276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.354311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.354906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.354938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.355448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.355481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.356060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.356092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.356690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.356722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.357327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.357360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.357872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.357903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.358472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.358513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.359125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.359157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.359760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.359792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 
00:26:53.068 [2024-07-26 14:08:20.360396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.360430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.361066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.361098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.361693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.361728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.362331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.362365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.362858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.362890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.363472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.363506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.364102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.364134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.364725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.364757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.365355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.365388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.365962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.365993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 
00:26:53.068 [2024-07-26 14:08:20.366567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.366601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.367157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.367190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.367795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.367826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.368416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.368449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.369031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.369073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.369672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.068 [2024-07-26 14:08:20.369704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.068 qpair failed and we were unable to recover it. 00:26:53.068 [2024-07-26 14:08:20.370293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.370326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.370915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.370947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.371562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.371596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.372199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.372232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 
00:26:53.069 [2024-07-26 14:08:20.372871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.372904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.373431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.373463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.373977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.374009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.374647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.374681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.375292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.375327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.375900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.375931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.376408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.376441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.377026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.377068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.377674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.377705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.378296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.378330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 
00:26:53.069 [2024-07-26 14:08:20.378907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.378940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.379544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.379578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.380200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.380232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.380770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.380802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.381378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.381412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.381930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.381962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.382544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.382578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.383170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.383215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.383799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.383831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.384432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.384465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 
00:26:53.069 [2024-07-26 14:08:20.385036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.385077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.385654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.385685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.386212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.386228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.386802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.386834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.387461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.387494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.388099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.388132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.388727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.388759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.389361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.389395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.389918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.389948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.390528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.390561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 
00:26:53.069 [2024-07-26 14:08:20.391133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.391168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.391741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.391774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.392350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.392383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.069 qpair failed and we were unable to recover it. 00:26:53.069 [2024-07-26 14:08:20.392970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.069 [2024-07-26 14:08:20.393002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.070 qpair failed and we were unable to recover it. 00:26:53.070 [2024-07-26 14:08:20.393595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.070 [2024-07-26 14:08:20.393630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.070 qpair failed and we were unable to recover it. 00:26:53.070 [2024-07-26 14:08:20.394230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.070 [2024-07-26 14:08:20.394263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.070 qpair failed and we were unable to recover it. 00:26:53.070 [2024-07-26 14:08:20.394857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.070 [2024-07-26 14:08:20.394889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.070 qpair failed and we were unable to recover it. 00:26:53.070 [2024-07-26 14:08:20.395488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.070 [2024-07-26 14:08:20.395522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.070 qpair failed and we were unable to recover it. 00:26:53.070 [2024-07-26 14:08:20.396081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.070 [2024-07-26 14:08:20.396115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.070 qpair failed and we were unable to recover it. 00:26:53.070 [2024-07-26 14:08:20.396623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.070 [2024-07-26 14:08:20.396655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.070 qpair failed and we were unable to recover it. 
00:26:53.070 [2024-07-26 14:08:20.397224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.070 [2024-07-26 14:08:20.397257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420
00:26:53.070 qpair failed and we were unable to recover it.
[... the same three-line error (connect() failed, errno = 111 (connection refused) / sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every subsequent reconnect attempt from 14:08:20.397 through 14:08:20.520 ...]
00:26:53.344 [2024-07-26 14:08:20.520446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.344 [2024-07-26 14:08:20.520483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420
00:26:53.344 qpair failed and we were unable to recover it.
00:26:53.344 [2024-07-26 14:08:20.521084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.521101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.521583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.521616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.522219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.522254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.522773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.522805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.523313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.523347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.523885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.523918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.524477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.524510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.525120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.525161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.525754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.525786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.526386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.526420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 
00:26:53.344 [2024-07-26 14:08:20.527020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.527064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.527606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.527638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.528221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.528238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.528799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.528831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.529420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.529454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.530060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.530095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.530703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.530736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.531325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.531359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.531985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.532017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.532614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.532651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 
00:26:53.344 [2024-07-26 14:08:20.533269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.533303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.533942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.533974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.534730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.534815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.535373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.535417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.536027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.536077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.536621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-07-26 14:08:20.536653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.344 qpair failed and we were unable to recover it. 00:26:53.344 [2024-07-26 14:08:20.537164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.537199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.537710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.537742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.538330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.538365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.538956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.538988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 
00:26:53.345 [2024-07-26 14:08:20.539565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.539600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.540202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.540220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.540798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.540832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.541436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.541471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.542279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.542315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.542820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.542852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.543419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.543453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.543917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.543949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.544477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.544520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.545031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.545058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 
00:26:53.345 [2024-07-26 14:08:20.545530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.545563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.546137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.546172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.546680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.546713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.547199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.547217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.547678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.547711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.548291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.548309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.548835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.548868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.549537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.549573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.550183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.550217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.550811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.550844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 
00:26:53.345 [2024-07-26 14:08:20.551368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.551402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.551908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.551941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.552442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.552475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.553010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.553054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.553643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.553659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.554212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.554246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.554783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.554815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.555396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.555430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.556032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.556078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.556689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.556723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 
00:26:53.345 [2024-07-26 14:08:20.557234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.557268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.557831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.557863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.558475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.558509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.345 qpair failed and we were unable to recover it. 00:26:53.345 [2024-07-26 14:08:20.559093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-07-26 14:08:20.559127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.559707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.559739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.560265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.560281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.560753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.560786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.561349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.561384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.561992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.562024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.562667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.562700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 
00:26:53.346 [2024-07-26 14:08:20.563308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.563342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.563917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.563949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.564543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.564576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.565123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.565157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.565700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.565739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.566306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.566340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.566856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.566897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.567428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.567462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.567967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.567999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.568518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.568552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 
00:26:53.346 [2024-07-26 14:08:20.569056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.569090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.569592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.569623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.570136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.570170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.570752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.570768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.571243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.571277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.571880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.571912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.572477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.572511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.573017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.573068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.573555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.573589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.574088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.574122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 
00:26:53.346 [2024-07-26 14:08:20.574680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.574712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.575214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.575248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.575743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.575777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.576216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.576250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.576757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.576790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.577303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.577336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.577923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.577954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.578554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.578589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.578901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.578933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-07-26 14:08:20.579213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.579253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 
00:26:53.346 [2024-07-26 14:08:20.579791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-07-26 14:08:20.579823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.580326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.580360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.580969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.581001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.581575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.581608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.582186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.582220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.582736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.582767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.583267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.583301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.583873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.583905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.584499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.584532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.584976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.585008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 
00:26:53.347 [2024-07-26 14:08:20.585573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.585605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.586101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.586134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.586649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.586681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.587177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.587210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.587768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.587806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.588386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.588420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.588988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.589004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.589578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.589610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.590188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.590221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.590746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.590778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 
00:26:53.347 [2024-07-26 14:08:20.591231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.591264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.591844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.591875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.592394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.592427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.592899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.592931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.593353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.593394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.593951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.593967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.594561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.594595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.594896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.594928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.595486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.595519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-07-26 14:08:20.596038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-07-26 14:08:20.596080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 
00:26:53.347 [2024-07-26 14:08:20.596642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.347 [2024-07-26 14:08:20.596673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420
00:26:53.347 qpair failed and we were unable to recover it.
[... the same three-line sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats for every retry timestamp between 14:08:20.596642 and 14:08:20.717809 ...]
00:26:53.353 [2024-07-26 14:08:20.717777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.353 [2024-07-26 14:08:20.717809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420
00:26:53.353 qpair failed and we were unable to recover it.
00:26:53.353 [2024-07-26 14:08:20.718419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-07-26 14:08:20.718451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-07-26 14:08:20.718998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-07-26 14:08:20.719031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-07-26 14:08:20.719555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-07-26 14:08:20.719588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-07-26 14:08:20.720166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-07-26 14:08:20.720211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-07-26 14:08:20.720676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-07-26 14:08:20.720708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-07-26 14:08:20.721281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-07-26 14:08:20.721316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-07-26 14:08:20.721772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-07-26 14:08:20.721804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-07-26 14:08:20.722543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-07-26 14:08:20.722577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-07-26 14:08:20.723029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-07-26 14:08:20.723070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-07-26 14:08:20.723577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-07-26 14:08:20.723609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 
00:26:53.354 [2024-07-26 14:08:20.724228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.724261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.724843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.724876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.725484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.725518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.726105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.726138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.726611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.726656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.727223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.727242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.727773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.727791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.728349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.728367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.728832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.728850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.729328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.729347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 
00:26:53.354 [2024-07-26 14:08:20.729796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.729832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.730371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.730407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.731926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.731962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.732593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.732633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.733222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.733259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.733853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.733872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.734394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.734433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.734958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.734994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.735478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.735497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.735987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.736005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 
00:26:53.354 [2024-07-26 14:08:20.736493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.736513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.736998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.737033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.737648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.737666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.738233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.738273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.738788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.738806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.739356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.739375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.739806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.739824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.740301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.740319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.740795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.740813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.741391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.741434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 
00:26:53.354 [2024-07-26 14:08:20.741906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.741948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.742487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.742524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.743070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.743089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.743520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.743537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.744061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.744081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.744549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.744568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.744988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.354 [2024-07-26 14:08:20.745006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.354 qpair failed and we were unable to recover it. 00:26:53.354 [2024-07-26 14:08:20.745535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.745555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.745977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.745996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.746446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.746463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 
00:26:53.355 [2024-07-26 14:08:20.746902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.746920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.747522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.747541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.747927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.747943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.748380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.748399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.748858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.748879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.749493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.749512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.750006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.750027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.750535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.750553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.751117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.751136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.751556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.751574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 
00:26:53.355 [2024-07-26 14:08:20.752095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.752114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.752649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.752668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.753181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.753203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.754301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.754336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.754986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.755005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.755654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.755673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.756283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.756301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.756717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.756735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.757327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.757370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.757910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.757947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 
00:26:53.355 [2024-07-26 14:08:20.758466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.758485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.759057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.759076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.759502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.759521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.759997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.760015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.760511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.760531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.761083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.761103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.761590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.761626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.762179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.762215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.762727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.762748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.763294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.763313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 
00:26:53.355 [2024-07-26 14:08:20.763733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.763749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.764177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.764215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.764666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.764700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.765221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.765243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.355 [2024-07-26 14:08:20.765719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.355 [2024-07-26 14:08:20.765738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.355 qpair failed and we were unable to recover it. 00:26:53.356 [2024-07-26 14:08:20.766280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.356 [2024-07-26 14:08:20.766297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.356 qpair failed and we were unable to recover it. 00:26:53.356 [2024-07-26 14:08:20.766773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.356 [2024-07-26 14:08:20.766791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.356 qpair failed and we were unable to recover it. 00:26:53.356 [2024-07-26 14:08:20.767281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.356 [2024-07-26 14:08:20.767297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.356 qpair failed and we were unable to recover it. 00:26:53.356 [2024-07-26 14:08:20.767727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.356 [2024-07-26 14:08:20.767744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.356 qpair failed and we were unable to recover it. 00:26:53.356 [2024-07-26 14:08:20.768218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.356 [2024-07-26 14:08:20.768236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.356 qpair failed and we were unable to recover it. 
00:26:53.356 [2024-07-26 14:08:20.768658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.356 [2024-07-26 14:08:20.768675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.356 qpair failed and we were unable to recover it. 00:26:53.356 [2024-07-26 14:08:20.769263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.356 [2024-07-26 14:08:20.769281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.356 qpair failed and we were unable to recover it. 00:26:53.356 [2024-07-26 14:08:20.769827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.356 [2024-07-26 14:08:20.769843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.356 qpair failed and we were unable to recover it. 00:26:53.356 [2024-07-26 14:08:20.770342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.356 [2024-07-26 14:08:20.770363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.356 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.771007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.771062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.771649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.771681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.772221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.772238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.772731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.772762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.773324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.773359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.773952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.773983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 
00:26:53.623 [2024-07-26 14:08:20.774533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.774565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.775110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.775144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.775668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.775699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.776250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.776281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.776793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.776825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.777433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.777468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.778029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.778074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.778686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.778717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.779315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.779349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.779874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.779905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 
00:26:53.623 [2024-07-26 14:08:20.780412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.780445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.780960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.780992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.781519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.781553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.782010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.782039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.782521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.782553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.783184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.783218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.783745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.783776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.784297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.784330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.784792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.784823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 00:26:53.623 [2024-07-26 14:08:20.785410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.623 [2024-07-26 14:08:20.785443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.623 qpair failed and we were unable to recover it. 
00:26:53.623 [2024-07-26 14:08:20.785959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.624 [2024-07-26 14:08:20.785989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.624 qpair failed and we were unable to recover it. 00:26:53.624 [2024-07-26 14:08:20.786522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.624 [2024-07-26 14:08:20.786555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.624 qpair failed and we were unable to recover it. 00:26:53.624 [2024-07-26 14:08:20.787219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.624 [2024-07-26 14:08:20.787252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.624 qpair failed and we were unable to recover it. 00:26:53.624 [2024-07-26 14:08:20.787770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.624 [2024-07-26 14:08:20.787800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.624 qpair failed and we were unable to recover it. 00:26:53.624 [2024-07-26 14:08:20.788320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.624 [2024-07-26 14:08:20.788353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.624 qpair failed and we were unable to recover it. 00:26:53.624 [2024-07-26 14:08:20.788942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.624 [2024-07-26 14:08:20.788973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.624 qpair failed and we were unable to recover it. 00:26:53.624 [2024-07-26 14:08:20.789624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.624 [2024-07-26 14:08:20.789656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.624 qpair failed and we were unable to recover it. 00:26:53.624 [2024-07-26 14:08:20.790181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.624 [2024-07-26 14:08:20.790196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.624 qpair failed and we were unable to recover it. 00:26:53.624 [2024-07-26 14:08:20.790673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.624 [2024-07-26 14:08:20.790704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.624 qpair failed and we were unable to recover it. 00:26:53.624 [2024-07-26 14:08:20.791291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.624 [2024-07-26 14:08:20.791325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.624 qpair failed and we were unable to recover it. 
00:26:53.624 [2024-07-26 14:08:20.791852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.624 [2024-07-26 14:08:20.791883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.624 qpair failed and we were unable to recover it. 00:26:53.624 [2024-07-26 14:08:20.792488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.624 [2024-07-26 14:08:20.792521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.624 qpair failed and we were unable to recover it. 00:26:53.624 [2024-07-26 14:08:20.793075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.624 [2024-07-26 14:08:20.793091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.624 qpair failed and we were unable to recover it. 00:26:53.624 [2024-07-26 14:08:20.793588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.624 [2024-07-26 14:08:20.793602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.624 qpair failed and we were unable to recover it. 00:26:53.624 [2024-07-26 14:08:20.794126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.624 [2024-07-26 14:08:20.794165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.624 qpair failed and we were unable to recover it. 00:26:53.624 [2024-07-26 14:08:20.794624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.624 [2024-07-26 14:08:20.794655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.624 qpair failed and we were unable to recover it. 00:26:53.624 [2024-07-26 14:08:20.795271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.624 [2024-07-26 14:08:20.795287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.624 qpair failed and we were unable to recover it. 00:26:53.624 [2024-07-26 14:08:20.795892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.624 [2024-07-26 14:08:20.795923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.624 qpair failed and we were unable to recover it. 00:26:53.624 [2024-07-26 14:08:20.796463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.624 [2024-07-26 14:08:20.796495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.624 qpair failed and we were unable to recover it. 00:26:53.624 [2024-07-26 14:08:20.797106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.624 [2024-07-26 14:08:20.797139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.624 qpair failed and we were unable to recover it. 
00:26:53.629 [2024-07-26 14:08:20.908339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.629 [2024-07-26 14:08:20.908371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.629 qpair failed and we were unable to recover it. 00:26:53.629 [2024-07-26 14:08:20.908969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.629 [2024-07-26 14:08:20.908999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.629 qpair failed and we were unable to recover it. 00:26:53.629 [2024-07-26 14:08:20.909603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.629 [2024-07-26 14:08:20.909619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.629 qpair failed and we were unable to recover it. 00:26:53.629 [2024-07-26 14:08:20.910176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.910209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.910669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.910699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.911271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.911287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.911838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.911852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.912354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.912386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.912899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.912931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.913492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.913524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 
00:26:53.630 [2024-07-26 14:08:20.914059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.914090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.914612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.914643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.915165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.915197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.915704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.915735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.916370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.916402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.916962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.916993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.917546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.917579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.918198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.918229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.918853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.918886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.919409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.919424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 
00:26:53.630 [2024-07-26 14:08:20.919952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.919982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.920593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.920626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.921254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.921286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.921822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.921853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.922711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.922794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.923457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.923499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.924120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.924154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.924678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.924710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.925263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.925279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.925888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.925919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 
00:26:53.630 [2024-07-26 14:08:20.926476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.926508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.927084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.927125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.927685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.927715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.928303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.928350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.928801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.928832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.929441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.929474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.930067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.930099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.930618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.930648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.630 [2024-07-26 14:08:20.931203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.630 [2024-07-26 14:08:20.931236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.630 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.931693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.931723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 
00:26:53.631 [2024-07-26 14:08:20.932206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.932221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.932698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.932729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.933307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.933341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.933794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.933825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.934358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.934389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.934963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.934994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.935523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.935556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.936014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.936054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.936638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.936669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.937204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.937237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 
00:26:53.631 [2024-07-26 14:08:20.937751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.937782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.938376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.938408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.938921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.938953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.939411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.939443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.940016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.940056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.940588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.940618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.941159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.941192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.941656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.941686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.942257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.942289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.942869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.942900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 
00:26:53.631 [2024-07-26 14:08:20.943511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.943543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.944092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.944107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.944593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.944624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.945198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.945232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.945696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.945726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.946301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.946317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.946849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.946888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.947361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.947393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.947928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.947958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.948469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.948500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 
00:26:53.631 [2024-07-26 14:08:20.949073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.949106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.949626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.949662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.950276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.950308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.950823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.950854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.631 qpair failed and we were unable to recover it. 00:26:53.631 [2024-07-26 14:08:20.951436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.631 [2024-07-26 14:08:20.951468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.952022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.952062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.952597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.952629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.953195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.953227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.953837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.953868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.954477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.954509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 
00:26:53.632 [2024-07-26 14:08:20.954968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.954999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.955463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.955497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.956094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.956126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.956581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.956596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.957206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.957237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.957768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.957799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.958387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.958419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.958895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.958926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.959489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.959505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.960033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.960074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 
00:26:53.632 [2024-07-26 14:08:20.960660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.960691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.961122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.961153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.961629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.961661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.962238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.962269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.962855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.962885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.963492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.963524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.963971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.963986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.964552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.964567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.965065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.965099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.965675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.965707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 
00:26:53.632 [2024-07-26 14:08:20.966301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.966317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.966796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.966829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.967415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.967447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.967971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.632 [2024-07-26 14:08:20.968001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.632 qpair failed and we were unable to recover it. 00:26:53.632 [2024-07-26 14:08:20.968503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.968533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.969149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.969181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.969695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.969725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.970327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.970360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.970821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.970835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.971390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.971423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 
00:26:53.633 [2024-07-26 14:08:20.971983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.972014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.972571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.972609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.973386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.973422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.974035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.974088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.974600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.974631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.975241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.975275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.975739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.975771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.976289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.976321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.976882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.976901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.977438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.977455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 
00:26:53.633 [2024-07-26 14:08:20.977935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.977952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.978447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.978482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.979002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.979036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.979565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.979582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.980147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.980164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.980640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.980656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.981196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.981214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.981693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.981709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.982136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.982153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.982581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.982597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 
00:26:53.633 [2024-07-26 14:08:20.983200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.983219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.983646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.983663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.984198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.984215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.984706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.984740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.985298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.985316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.985738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.985756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.986306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.986325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.986794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.986810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.987384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.633 [2024-07-26 14:08:20.987401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.633 qpair failed and we were unable to recover it. 00:26:53.633 [2024-07-26 14:08:20.988647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.988679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 
00:26:53.634 [2024-07-26 14:08:20.989270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.989290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:20.989781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.989816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:20.990356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.990373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:20.990933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.990950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:20.991429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.991446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:20.991865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.991881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:20.992285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.992302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:20.992786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.992802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:20.993327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.993346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:20.993822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.993838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 
00:26:53.634 [2024-07-26 14:08:20.994356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.994376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:20.994882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.994903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:20.995400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.995416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:20.995908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.995924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:20.996457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.996474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:20.997060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.997078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:20.997586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.997603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:20.998180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.998197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:20.998752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.998768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:20.999270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.999287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 
00:26:53.634 [2024-07-26 14:08:20.999770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:20.999786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:21.000226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:21.000257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:21.000726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:21.000755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:21.001255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:21.001279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:21.001790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:21.001809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:21.002381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:21.002412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:21.003014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:21.003039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:21.003574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:21.003596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:21.004034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:21.004070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:21.004577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:21.004602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 
00:26:53.634 [2024-07-26 14:08:21.005171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:21.005193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:21.005648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:21.005671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:21.006249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:21.006282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:21.006793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:21.006814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:21.007349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.634 [2024-07-26 14:08:21.007379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.634 qpair failed and we were unable to recover it. 00:26:53.634 [2024-07-26 14:08:21.007846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.007877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.008362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.008397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.008999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.009032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.009530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.009554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.010062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.010086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 
00:26:53.635 [2024-07-26 14:08:21.010585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.010611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.011242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.011280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.011811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.011826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.012355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.012389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.012911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.012942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.013412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.013428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.013868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.013883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.014377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.014393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.014819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.014833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.015337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.015352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 
00:26:53.635 [2024-07-26 14:08:21.015948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.015979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.016553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.016573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.017011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.017026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.017498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.017514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.017983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.017999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.018496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.018530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.019106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.019139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.019966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.019981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.020517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.020532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.021001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.021016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 
00:26:53.635 [2024-07-26 14:08:21.021506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.021521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.021943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.021958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.022493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.022508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.022983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.022998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.023473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.023488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.023979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.023994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.024478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.024493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.025008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.025023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.025449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.025465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.025941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.025956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 
00:26:53.635 [2024-07-26 14:08:21.026427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.635 [2024-07-26 14:08:21.026443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.635 qpair failed and we were unable to recover it. 00:26:53.635 [2024-07-26 14:08:21.026870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.026883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.027381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.027398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.027947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.027961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.028485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.028501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.028988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.029003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.029556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.029571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.030216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.030232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.030787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.030802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.031363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.031378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 
00:26:53.636 [2024-07-26 14:08:21.031881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.031895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.032419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.032435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.033001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.033016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.033495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.033511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.033936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.033950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.034383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.034398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.035146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.035163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.035649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.035663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.036207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.036224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.036642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.036655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 
00:26:53.636 [2024-07-26 14:08:21.037189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.037204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.037626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.037644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.038216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.038233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.038752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.038766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.039262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.039278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.039843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.039857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.040261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.040277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.040771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.040786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.041483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.041500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.042058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.042074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 
00:26:53.636 [2024-07-26 14:08:21.042512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.042526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.042945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.042960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.043460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.043475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.043950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.043964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.044486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.044502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.045138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.045154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.045693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.045707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.046244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.636 [2024-07-26 14:08:21.046259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.636 qpair failed and we were unable to recover it. 00:26:53.636 [2024-07-26 14:08:21.046771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.637 [2024-07-26 14:08:21.046786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.637 qpair failed and we were unable to recover it. 00:26:53.637 [2024-07-26 14:08:21.047353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.637 [2024-07-26 14:08:21.047368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.637 qpair failed and we were unable to recover it. 
00:26:53.637 [2024-07-26 14:08:21.047852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.637 [2024-07-26 14:08:21.047866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.637 qpair failed and we were unable to recover it. 00:26:53.637 [2024-07-26 14:08:21.048349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.637 [2024-07-26 14:08:21.048364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.637 qpair failed and we were unable to recover it. 00:26:53.637 [2024-07-26 14:08:21.048888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.637 [2024-07-26 14:08:21.048902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.637 qpair failed and we were unable to recover it. 00:26:53.637 [2024-07-26 14:08:21.049426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.637 [2024-07-26 14:08:21.049441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.637 qpair failed and we were unable to recover it. 00:26:53.637 [2024-07-26 14:08:21.049907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.637 [2024-07-26 14:08:21.049922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.637 qpair failed and we were unable to recover it. 00:26:53.637 [2024-07-26 14:08:21.050396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.637 [2024-07-26 14:08:21.050411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.637 qpair failed and we were unable to recover it. 00:26:53.637 [2024-07-26 14:08:21.050878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.637 [2024-07-26 14:08:21.050891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.637 qpair failed and we were unable to recover it. 00:26:53.637 [2024-07-26 14:08:21.051471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.637 [2024-07-26 14:08:21.051486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.637 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.052007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.052024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.052518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.052532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 
00:26:53.907 [2024-07-26 14:08:21.052949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.052963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.053425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.053441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.053909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.053923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.054487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.054502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.054976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.054990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.055540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.055554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.056171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.056187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.056600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.056614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.057085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.057099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.057564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.057578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 
00:26:53.907 [2024-07-26 14:08:21.058213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.058228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.058699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.058716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.059190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.059204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.059673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.059687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.060286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.060300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.060781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.060795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.061310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.061324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.061851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.061866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.062381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.062396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.062914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.062927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 
00:26:53.907 [2024-07-26 14:08:21.063343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.063358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.063874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.063888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.064364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.064379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.064901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.064915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.065470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.065486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.065917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.065931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.066456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.066470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.067055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.067069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.067539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.067553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.068041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.068064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 
00:26:53.907 [2024-07-26 14:08:21.068534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.068548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.069127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.907 [2024-07-26 14:08:21.069142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.907 qpair failed and we were unable to recover it. 00:26:53.907 [2024-07-26 14:08:21.069658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.069672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.070208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.070223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.070707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.070721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.071137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.071151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.071608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.071623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.072169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.072184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.072601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.072615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.073132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.073148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 
00:26:53.908 [2024-07-26 14:08:21.073797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.073811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.074375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.074390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.074802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.074816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.075351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.075365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.075877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.075890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.076519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.076534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.077091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.077106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.077620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.077634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.078177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.078192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.078673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.078686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 
00:26:53.908 [2024-07-26 14:08:21.079231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.079246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.079713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.079729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.080203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.080217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.080676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.080690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.081243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.081258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.081844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.081858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.082443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.082458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.082880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.082894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.083383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.083399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.083809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.083823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 
00:26:53.908 [2024-07-26 14:08:21.084282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.084297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.084995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.085009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.085565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.085580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.086106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.086120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.086635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.086648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.087120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.087135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.087597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.087610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.088207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.088221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.088777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.908 [2024-07-26 14:08:21.088791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.908 qpair failed and we were unable to recover it. 00:26:53.908 [2024-07-26 14:08:21.089369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.089384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 
00:26:53.909 [2024-07-26 14:08:21.089868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.089882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.090447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.090461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.090943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.090957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.091414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.091428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.091957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.091971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.092479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.092493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.092946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.092960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.093424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.093438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.094049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.094064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.094548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.094561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 
00:26:53.909 [2024-07-26 14:08:21.095068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.095088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.095504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.095518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.095929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.095942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.096464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.096479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.096965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.096978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.097461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.097475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.098073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.098088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.098559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.098572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.099041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.099062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.099536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.099551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 
00:26:53.909 [2024-07-26 14:08:21.100131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.100145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.100639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.100655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.101174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.101191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.101655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.101668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.102206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.102220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.102631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.102644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.103212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.103226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.103715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.103729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.104302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.104315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.104729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.104743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 
00:26:53.909 [2024-07-26 14:08:21.105228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.105242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.105673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.105687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.106258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.106272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.106849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.106863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.107413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.107427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.909 [2024-07-26 14:08:21.107836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.909 [2024-07-26 14:08:21.107849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.909 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.108307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.108321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.108736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.108750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.109259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.109273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.109813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.109826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 
00:26:53.910 [2024-07-26 14:08:21.110348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.110362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.110877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.110891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.111292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.111306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.111815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.111829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.112335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.112349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.112812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.112825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.113351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.113366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.113919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.113932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.114444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.114459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.114933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.114947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 
00:26:53.910 [2024-07-26 14:08:21.115412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.115426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.115910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.115923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.116476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.116490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.117076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.117090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.117668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.117681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.118265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.118279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.118797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.118810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.119269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.119283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.119742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.119756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.120308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.120321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 
00:26:53.910 [2024-07-26 14:08:21.120853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.120866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.121412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.121425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.121986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.122000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.122528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.122542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.123002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.123015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.123595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.123609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.124163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.124177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.124678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.124691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.125230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.125244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.125724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.125737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 
00:26:53.910 [2024-07-26 14:08:21.126231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.126245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.126752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.126765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.127341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.910 [2024-07-26 14:08:21.127355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.910 qpair failed and we were unable to recover it. 00:26:53.910 [2024-07-26 14:08:21.127912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.127926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.128495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.128509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.129036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.129055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.129543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.129557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.130066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.130080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.130651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.130665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.131219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.131233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 
00:26:53.911 [2024-07-26 14:08:21.131707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.131720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.132238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.132253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.132744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.132757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.133448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.133462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.133956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.133970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.134446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.134460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.134910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.134923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.135370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.135384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.135893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.135909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.136514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.136529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 
00:26:53.911 [2024-07-26 14:08:21.137053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.137068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.137903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.137917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.138449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.138464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.138926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.138939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.139485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.139499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.140079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.140093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.140654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.140667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.141245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.141259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.141748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.141761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.142184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.142198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 
00:26:53.911 [2024-07-26 14:08:21.142740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.142753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.143285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.143299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.143847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.911 [2024-07-26 14:08:21.143860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.911 qpair failed and we were unable to recover it. 00:26:53.911 [2024-07-26 14:08:21.144418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.144432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.144881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.144895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.145401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.145415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.145941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.145954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.146495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.146509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.146895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.146909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.147456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.147469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 
00:26:53.912 [2024-07-26 14:08:21.148023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.148036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.148498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.148511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.149046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.149060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.149615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.149628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.150085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.150099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.150550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.150564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.151094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.151108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.151667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.151680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.152252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.152266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.152783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.152796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 
00:26:53.912 [2024-07-26 14:08:21.153326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.153340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.153827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.153840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.154361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.154375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.154926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.154939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.155397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.155411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.155939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.155952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.156508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.156522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.157052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.157066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.157550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.157566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.158101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.158116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 
00:26:53.912 [2024-07-26 14:08:21.158671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.158685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.159212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.159226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.159730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.159743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.160291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.160305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.160761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.160775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.161280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.161294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.161838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.161851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.162376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.162390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.162918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.162931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 00:26:53.912 [2024-07-26 14:08:21.163460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.912 [2024-07-26 14:08:21.163474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.912 qpair failed and we were unable to recover it. 
00:26:53.912 [2024-07-26 14:08:21.163950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.913 [2024-07-26 14:08:21.163964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.913 qpair failed and we were unable to recover it. 00:26:53.913 [2024-07-26 14:08:21.164459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.913 [2024-07-26 14:08:21.164473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.913 qpair failed and we were unable to recover it. 00:26:53.913 [2024-07-26 14:08:21.164941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.913 [2024-07-26 14:08:21.164955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.913 qpair failed and we were unable to recover it. 00:26:53.913 [2024-07-26 14:08:21.165482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.913 [2024-07-26 14:08:21.165496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.913 qpair failed and we were unable to recover it. 00:26:53.913 [2024-07-26 14:08:21.166049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.913 [2024-07-26 14:08:21.166063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.913 qpair failed and we were unable to recover it. 00:26:53.913 [2024-07-26 14:08:21.166616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.913 [2024-07-26 14:08:21.166629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.913 qpair failed and we were unable to recover it. 00:26:53.913 [2024-07-26 14:08:21.167162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.913 [2024-07-26 14:08:21.167177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.913 qpair failed and we were unable to recover it. 00:26:53.913 [2024-07-26 14:08:21.167683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.913 [2024-07-26 14:08:21.167696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.913 qpair failed and we were unable to recover it. 00:26:53.913 [2024-07-26 14:08:21.168268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.913 [2024-07-26 14:08:21.168282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.913 qpair failed and we were unable to recover it. 00:26:53.913 [2024-07-26 14:08:21.168744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.913 [2024-07-26 14:08:21.168757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.913 qpair failed and we were unable to recover it. 
00:26:53.913 [2024-07-26 14:08:21.169308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.913 [2024-07-26 14:08:21.169321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420
00:26:53.913 qpair failed and we were unable to recover it.
[... the identical three-message sequence (posix_sock_create "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it.") repeats 208 more times between 2024-07-26 14:08:21.169880 and 2024-07-26 14:08:21.279188 ...]
00:26:53.919 [2024-07-26 14:08:21.279718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.919 [2024-07-26 14:08:21.279731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420
00:26:53.919 qpair failed and we were unable to recover it.
00:26:53.919 [2024-07-26 14:08:21.280287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.280301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.280854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.280867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.281400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.281414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.281971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.281984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.282542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.282556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.283008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.283020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.283555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.283569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.284100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.284114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.284634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.284650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.285160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.285174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 
00:26:53.919 [2024-07-26 14:08:21.285743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.285756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.286269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.286282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.286747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.286760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.287274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.287288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.287868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.287881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.288422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.288436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.288987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.289000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.289460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.289474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.290004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.290017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.290580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.290594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 
00:26:53.919 [2024-07-26 14:08:21.291149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.291162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.291685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.291698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.292183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.292198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.292728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.292741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.293146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.293160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.293690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.293703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.294255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.294269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.294812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.294825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.295358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.295372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.295887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.295900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 
00:26:53.919 [2024-07-26 14:08:21.296410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.296423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.296956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.296969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.297525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.297539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.298089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.298103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.298613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.919 [2024-07-26 14:08:21.298626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.919 qpair failed and we were unable to recover it. 00:26:53.919 [2024-07-26 14:08:21.299194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.299208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.299759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.299772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.300348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.300362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.300920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.300933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.301485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.301499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 
00:26:53.920 [2024-07-26 14:08:21.301925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.301938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.302472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.302486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.303037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.303054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.303580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.303594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.304118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.304132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.304708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.304721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.305274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.305288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.305795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.305808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.306381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.306399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.306854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.306867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 
00:26:53.920 [2024-07-26 14:08:21.307389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.307403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.307932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.307946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.308476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.308490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.309062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.309076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.309633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.309646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.310152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.310166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.310634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.310647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.311121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.311135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.311650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.311663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.312192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.312205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 
00:26:53.920 [2024-07-26 14:08:21.312736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.312749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.313233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.313247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.313756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.313770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.314300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.314313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.314892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.314905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.315463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.315477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.316028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.316047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.316524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.316538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.316992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.317005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.317536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.317550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 
00:26:53.920 [2024-07-26 14:08:21.318101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.318114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.920 qpair failed and we were unable to recover it. 00:26:53.920 [2024-07-26 14:08:21.318680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.920 [2024-07-26 14:08:21.318693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.319119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.319133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.319664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.319677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.320230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.320244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.320736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.320749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.321259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.321273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.321831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.321844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.322301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.322315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.322843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.322857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 
00:26:53.921 [2024-07-26 14:08:21.323339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.323353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.323914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.323927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.324408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.324422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.324946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.324959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.325415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.325430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.325944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.325957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.326442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.326455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.326984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.326997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.327445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.327462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.327999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.328012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 
00:26:53.921 [2024-07-26 14:08:21.328575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.328589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.329094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.329108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.329633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.329647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.330205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.330219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.330736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.330749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.331269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.331282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.331855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.331869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.332442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.332456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.332992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.333005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 00:26:53.921 [2024-07-26 14:08:21.333538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.921 [2024-07-26 14:08:21.333552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:53.921 qpair failed and we were unable to recover it. 
00:26:54.188 [2024-07-26 14:08:21.334373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.334390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.334922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.334935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.335389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.335402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.335860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.335873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.336370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.336384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.336916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.336930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.337434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.337449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.338019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.338032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.338518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.338532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.339059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.339073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 
00:26:54.188 [2024-07-26 14:08:21.339601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.339615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.340145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.340159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.340743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.340757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.341285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.341299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.341753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.341766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.342247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.342262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.342716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.342729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.343259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.343273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.343817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.343830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.344381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.344403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 
00:26:54.188 [2024-07-26 14:08:21.344977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.344990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.345549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.345563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.346124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.346137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.346727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.346741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.347325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.347339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.347919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.347932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.348458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.348472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.349007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.349021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.349590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.349606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.350167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.350181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 
00:26:54.188 [2024-07-26 14:08:21.350688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.350701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.351271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.351285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.351841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.351854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.352403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.352416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.352895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.352908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-26 14:08:21.353446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-26 14:08:21.353460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.353986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.353999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.354527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.354541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.355110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.355124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.355668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.355682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 
00:26:54.189 [2024-07-26 14:08:21.356200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.356215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.356669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.356682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.357213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.357227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.357696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.357710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.358240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.358254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.358793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.358806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.359324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.359338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.359796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.359810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.360290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.360304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.360890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.360903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 
00:26:54.189 [2024-07-26 14:08:21.361457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.361471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.361935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.361948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.362502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.362516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.363072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.363087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.363617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.363630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.364094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.364108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.364659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.364672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.365225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.365239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.365772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.365785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.366339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.366353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 
00:26:54.189 [2024-07-26 14:08:21.366915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.366928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.367494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.367508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.368031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.368048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.368585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.368598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.369132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.369146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.369675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.369688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.370138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.370152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.370684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.370697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.371248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.371264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.371794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.371807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 
00:26:54.189 [2024-07-26 14:08:21.372341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.372355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-26 14:08:21.372889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-26 14:08:21.372902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.373361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.373374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.373823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.373837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.374366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.374380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.374941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.374955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.375409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.375423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.375937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.375950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.376507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.376521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.377083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.377097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 
00:26:54.190 [2024-07-26 14:08:21.377653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.377667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.378148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.378162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.378691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.378705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.379233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.379247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.379793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.379806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.380385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.380399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.380953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.380967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.381509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.381522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.382088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.382102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.382678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.382691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 
00:26:54.190 [2024-07-26 14:08:21.383166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.383180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.383733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.383746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.384299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.384313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.384711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.384724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.385184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.385197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.385767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.385781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.386356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.386370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.386929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.386942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.387497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.387510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.388063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.388077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 
00:26:54.190 [2024-07-26 14:08:21.388560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.388573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.389027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.389041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.389592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.389606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.190 qpair failed and we were unable to recover it. 00:26:54.190 [2024-07-26 14:08:21.390119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.190 [2024-07-26 14:08:21.390133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.390641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.390655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.391221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.391235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.391689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.391702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.392227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.392241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.392806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.392822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.393375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.393389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 
00:26:54.191 [2024-07-26 14:08:21.393931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.393944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.394497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.394511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.395092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.395106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.395592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.395605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.396127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.396141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.396700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.396712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.397197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.397211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.397682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.397695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.398199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.398213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.398744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.398757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 
00:26:54.191 [2024-07-26 14:08:21.399307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.399322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.399852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.399865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.400329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.400343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.400868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.400881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.401356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.401370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.401839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.401852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.402421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.402435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.403010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.403023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.403586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.403600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.404126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.404140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 
00:26:54.191 [2024-07-26 14:08:21.404647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.404660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.405186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.405200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.405707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.405721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.406137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.406151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.406702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.406715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.407292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.407308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.407764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.407777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.408318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.408332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.408910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.408923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.409474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.409488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 
00:26:54.191 [2024-07-26 14:08:21.410047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.410061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.410577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.410590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.411068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.411082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.411540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.411553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.412088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.412102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.412657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.412670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.413218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.413232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.413768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.413781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.414333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.414348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.414937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.414950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 
00:26:54.191 [2024-07-26 14:08:21.415452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.415466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.415927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.415940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.416468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.416482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.417025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.417038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.417553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.417566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.418138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.418152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.418657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.418670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.419222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.419236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.419685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-07-26 14:08:21.419698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-07-26 14:08:21.420177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.420191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 
00:26:54.192 [2024-07-26 14:08:21.420644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.420657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.421187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.421201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.421776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.421789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.422307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.422321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.422845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.422858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.423314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.423328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.423847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.423861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.424440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.424453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.424986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.424999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.425524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.425538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 
00:26:54.192 [2024-07-26 14:08:21.426023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.426036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.426569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.426583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.427132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.427146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.427579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.427593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.428148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.428161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.428701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.428716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.429267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.429281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.429819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.429833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.430351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.430365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.430945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.430959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 
00:26:54.192 [2024-07-26 14:08:21.431436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.431450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.431953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.431967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.432498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.432512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.433057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.433071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.433541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.433554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.434048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.434062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.434609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.434622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.435183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.435197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.435668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.435681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.436218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.436232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 
00:26:54.192 [2024-07-26 14:08:21.436726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.436739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.437307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.437321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.437829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.437842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.438372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.438386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.438941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.438954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.439460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.439474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.440002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.440016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.440577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.440593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.441155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.441169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.441678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.441691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 
00:26:54.192 [2024-07-26 14:08:21.442270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.442284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.442769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.442782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.443336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.443350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.443930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.443944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.444423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.444437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-26 14:08:21.444898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-26 14:08:21.444912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-26 14:08:21.445457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-26 14:08:21.445470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-26 14:08:21.446021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-26 14:08:21.446034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-26 14:08:21.446585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-26 14:08:21.446599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-26 14:08:21.447130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-26 14:08:21.447144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 
00:26:54.196 [2024-07-26 14:08:21.548948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.548961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.549492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.549506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.550070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.550105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.550628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.550644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.551204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.551220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.551789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.551803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.552287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.552309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.552797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.552810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.553364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.553378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.553950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.553963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 
00:26:54.196 [2024-07-26 14:08:21.554529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.554542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.555097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.555112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.555688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.555701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.556138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.556152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.556679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.556692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.557180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.557194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.557706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.557719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.558249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.558262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.558811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.558824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.559326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.559341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 
00:26:54.196 [2024-07-26 14:08:21.559875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.559889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.560445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.560458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.560919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.560932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.561469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.561482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.561963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.561977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.562455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.562469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.562952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.562964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.563366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.563380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.563829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.563842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.564376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.564390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 
00:26:54.196 [2024-07-26 14:08:21.564948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.564961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.565511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.565525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.566049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-26 14:08:21.566062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.196 qpair failed and we were unable to recover it. 00:26:54.196 [2024-07-26 14:08:21.566613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.566626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.567082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.567096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.567631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.567644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.568170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.568184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.568717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.568731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.569254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.569267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.569813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.569825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 
00:26:54.197 [2024-07-26 14:08:21.570350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.570364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.570934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.570947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.571449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.571463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.572011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.572025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.572552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.572566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.573107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.573120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.573560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.573573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.574033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.574049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.574604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.574617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.575098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.575121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 
00:26:54.197 [2024-07-26 14:08:21.575626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.575639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.576026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.576039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.576523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.576536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.577011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.577024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.577533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.577547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.578092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.578106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.578689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.578705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.579265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.579279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.579836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.579849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.580404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.580417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 
00:26:54.197 [2024-07-26 14:08:21.580924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.580938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.581491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.581504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.582099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.582112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.582684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.582696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.583259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.583273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.583776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.583789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.584364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.584378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.584842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.584855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.585379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.585393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.585862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.585875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 
00:26:54.197 [2024-07-26 14:08:21.586418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.586433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.586985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.586998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.587491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.587505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.588036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.588054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.588597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.588610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.589181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.589195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.589753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.589767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.590320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.590333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.590811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.590824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.591359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.591373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 
00:26:54.197 [2024-07-26 14:08:21.591926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.591939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.592473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.592487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.593039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.593061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.593628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.593644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.594173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.594186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.594764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.594776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.595312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.595326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.595832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.595845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.596395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.596408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.596938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.596951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 
00:26:54.197 [2024-07-26 14:08:21.597481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.597495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.598049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.598062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.598605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.598619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.599069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.599083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-26 14:08:21.599540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-26 14:08:21.599554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.600007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.600020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.600575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.600589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.601074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.601088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.601622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.601635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.602193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.602206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 
00:26:54.198 [2024-07-26 14:08:21.602765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.602778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.603237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.603251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.603705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.603718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.604195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.604209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.604759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.604772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.605273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.605287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.605814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.605827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.606387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.606401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.606965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.606978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.607530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.607543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 
00:26:54.198 [2024-07-26 14:08:21.608099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.608121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.608680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.608693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.609242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.609255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.609761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.609774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.610247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.610261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.610829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.610842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.611415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.611429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.611969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.611983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.612519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.612533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.613106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.613120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 
00:26:54.198 [2024-07-26 14:08:21.613664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.613678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.614179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.614193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.614692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.614705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.615162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.615175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.615591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.615604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.616054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.616068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.616612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.616625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.617203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.617217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-26 14:08:21.617712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-26 14:08:21.617725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.467 [2024-07-26 14:08:21.618243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-07-26 14:08:21.618259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 
00:26:54.467 [2024-07-26 14:08:21.618741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-07-26 14:08:21.618754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-07-26 14:08:21.619249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-07-26 14:08:21.619263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-07-26 14:08:21.619741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-07-26 14:08:21.619755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-07-26 14:08:21.620252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-07-26 14:08:21.620265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-07-26 14:08:21.620815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-07-26 14:08:21.620828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-07-26 14:08:21.621340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-07-26 14:08:21.621353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-07-26 14:08:21.621935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-07-26 14:08:21.621949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-07-26 14:08:21.622458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-07-26 14:08:21.622472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-07-26 14:08:21.623041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-07-26 14:08:21.623061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-07-26 14:08:21.623627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-07-26 14:08:21.623641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 
00:26:54.467 [2024-07-26 14:08:21.624182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-07-26 14:08:21.624197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-07-26 14:08:21.624653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-07-26 14:08:21.624666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-07-26 14:08:21.625193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-07-26 14:08:21.625213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-07-26 14:08:21.625694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-07-26 14:08:21.625707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-07-26 14:08:21.626238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-07-26 14:08:21.626251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-07-26 14:08:21.626805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-07-26 14:08:21.626818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-07-26 14:08:21.627344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-07-26 14:08:21.627358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-07-26 14:08:21.627864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-07-26 14:08:21.627877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-07-26 14:08:21.628334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.628348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.628834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.628848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 
00:26:54.468 [2024-07-26 14:08:21.629346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.629361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.629905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.629921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.630472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.630486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.630894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.630907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.631415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.631429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.631980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.631993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.632453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.632467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.632975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.632988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.633491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.633505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.634048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.634062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 
00:26:54.468 [2024-07-26 14:08:21.634616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.634629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.635132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.635145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.635624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.635637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.636162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.636176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.636718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.636731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.637256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.637270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.637844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.637858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.638407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.638421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.638981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.638994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.639406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.639420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 
00:26:54.468 [2024-07-26 14:08:21.639865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.639878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.640414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.640427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.640956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.640969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.641527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.641541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.642009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.642023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.642490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.642505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-07-26 14:08:21.643017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-07-26 14:08:21.643031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 
00:26:54.468 Read completed with error (sct=0, sc=8)
00:26:54.468 starting I/O failed
00:26:54.468 Read completed with error (sct=0, sc=8)
00:26:54.468 starting I/O failed
00:26:54.468 Read completed with error (sct=0, sc=8)
00:26:54.468 starting I/O failed
00:26:54.468 Read completed with error (sct=0, sc=8)
00:26:54.468 starting I/O failed
00:26:54.468 Read completed with error (sct=0, sc=8)
00:26:54.468 starting I/O failed
00:26:54.468 Read completed with error (sct=0, sc=8)
00:26:54.468 starting I/O failed
00:26:54.468 Read completed with error (sct=0, sc=8)
00:26:54.468 starting I/O failed
00:26:54.468 Write completed with error (sct=0, sc=8)
00:26:54.468 starting I/O failed
00:26:54.468 Read completed with error (sct=0, sc=8)
00:26:54.468 starting I/O failed
00:26:54.468 Write completed with error (sct=0, sc=8)
00:26:54.468 starting I/O failed
00:26:54.468 Read completed with error (sct=0, sc=8)
00:26:54.468 starting I/O failed
00:26:54.468 Read completed with error (sct=0, sc=8)
00:26:54.468 starting I/O failed
00:26:54.468 Write completed with error (sct=0, sc=8)
00:26:54.468 starting I/O failed
00:26:54.468 Write completed with error (sct=0, sc=8)
00:26:54.468 starting I/O failed
00:26:54.468 Read completed with error (sct=0, sc=8)
00:26:54.468 starting I/O failed
00:26:54.468 Write completed with error (sct=0, sc=8)
00:26:54.469 starting I/O failed
00:26:54.469 Read completed with error (sct=0, sc=8)
00:26:54.469 starting I/O failed
00:26:54.469 Read completed with error (sct=0, sc=8)
00:26:54.469 starting I/O failed
00:26:54.469 Read completed with error (sct=0, sc=8)
00:26:54.469 starting I/O failed
00:26:54.469 Write completed with error (sct=0, sc=8)
00:26:54.469 starting I/O failed
00:26:54.469 Write completed with error (sct=0, sc=8)
00:26:54.469 starting I/O failed
00:26:54.469 Write completed with error (sct=0, sc=8)
00:26:54.469 starting I/O failed
00:26:54.469 Write completed with error (sct=0, sc=8)
00:26:54.469 starting I/O failed
00:26:54.469 Write completed with error (sct=0, sc=8)
00:26:54.469 starting I/O failed
00:26:54.469 Write completed with error (sct=0, sc=8)
00:26:54.469 starting I/O failed
00:26:54.469 Write completed with error (sct=0, sc=8)
00:26:54.469 starting I/O failed
00:26:54.469 Read completed with error (sct=0, sc=8)
00:26:54.469 starting I/O failed
00:26:54.469 Read completed with error (sct=0, sc=8)
00:26:54.469 starting I/O failed
00:26:54.469 Read completed with error (sct=0, sc=8)
00:26:54.469 starting I/O failed
00:26:54.469 Write completed with error (sct=0, sc=8)
00:26:54.469 starting I/O failed
00:26:54.469 Write completed with error (sct=0, sc=8)
00:26:54.469 starting I/O failed
00:26:54.469 Write completed with error (sct=0, sc=8)
00:26:54.469 starting I/O failed
00:26:54.469 [2024-07-26 14:08:21.643364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:54.469 [2024-07-26 14:08:21.643738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.469 [2024-07-26 14:08:21.643757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420
00:26:54.469 qpair failed and we were unable to recover it.
00:26:54.469 [2024-07-26 14:08:21.644300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.644316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.644818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.644831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.645284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.645298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.645756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.645769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.646283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.646297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.646759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.646772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.647325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.647339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.647810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.647824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.648358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.648373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.648890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.648903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 
00:26:54.469 [2024-07-26 14:08:21.649365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.649379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.649933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.649947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.650460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.650475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.650930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.650944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.651393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.651407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.651934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.651947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.652487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.652503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.652911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.652924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.653482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.653496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.653986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.653999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 
00:26:54.469 [2024-07-26 14:08:21.654536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.654553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.654964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.654978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.655531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.655546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.656016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.656030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.656527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.656541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.657009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.657022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.657490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.657504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.658009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.658022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-07-26 14:08:21.658422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-07-26 14:08:21.658436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.658971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.658985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 
00:26:54.470 [2024-07-26 14:08:21.659429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.659443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.659945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.659958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.660447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.660460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.660986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.660999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.661474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.661488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.661966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.661979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.662389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.662404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.662910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.662924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.663399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.663413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.663871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.663884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 
00:26:54.470 [2024-07-26 14:08:21.664281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.664294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.664825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.664838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.665279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.665294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.665819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.665832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.666343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.666356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.666862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.666875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.667106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.667120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.667588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.667601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.668045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.668059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.668587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.668601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 
00:26:54.470 [2024-07-26 14:08:21.669106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.669120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.669559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.669572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.670107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.670120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.670564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.670577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.671122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.671136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.671593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.671607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.672115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.672128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.672591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.672604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.673134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.673147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.673667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.673680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 
00:26:54.470 [2024-07-26 14:08:21.673837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.673853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.674383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.674397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.674922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.674935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-07-26 14:08:21.675367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-07-26 14:08:21.675381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.675916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.675929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.676152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.676166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.676626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.676639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.677113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.677127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.677540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.677553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.678081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.678094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 
00:26:54.471 [2024-07-26 14:08:21.678313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.678326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.678797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.678810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.679282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.679296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.679812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.679825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.680303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.680317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.680832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.680845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.681303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.681316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.681777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.681790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.682316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.682330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.682786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.682799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 
00:26:54.471 [2024-07-26 14:08:21.683247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.683261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.683792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.683805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.684264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.684278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.684802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.684816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.685341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.685355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.685809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.685822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.686081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.686095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.686544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.686558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.687013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.687026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.687507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.687521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 
00:26:54.471 [2024-07-26 14:08:21.687975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.687988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.688521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.688534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.689040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.689057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.689603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.689616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.690089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-07-26 14:08:21.690104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-07-26 14:08:21.690690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.690704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.691225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.691238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.691686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.691700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.692247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.692261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.692831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.692844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 
00:26:54.472 [2024-07-26 14:08:21.693404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.693421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.693913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.693926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.694447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.694461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.694966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.694979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.695550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.695564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.696130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.696144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.696672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.696685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.697232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.697245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.697703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.697716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.698244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.698258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 
00:26:54.472 [2024-07-26 14:08:21.698742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.698755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.699209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.699223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.699661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.699674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.700181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.700194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.700769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.700782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.701336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.701349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.701925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.701938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.702514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.702528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.702981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.702995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.703402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.703416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 
00:26:54.472 [2024-07-26 14:08:21.703921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.703934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.704464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.704478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.704931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.704944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.705476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.705489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.706020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.706033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.706565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-07-26 14:08:21.706579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-07-26 14:08:21.707058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.707072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-07-26 14:08:21.707609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.707622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-07-26 14:08:21.708169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.708183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-07-26 14:08:21.708736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.708749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 
00:26:54.473 [2024-07-26 14:08:21.709211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.709225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-07-26 14:08:21.709746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.709759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-07-26 14:08:21.710336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.710350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-07-26 14:08:21.710864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.710877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-07-26 14:08:21.711442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.711456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-07-26 14:08:21.712020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.712033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-07-26 14:08:21.712597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.712611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-07-26 14:08:21.713168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.713182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-07-26 14:08:21.713709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.713723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-07-26 14:08:21.714276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.714289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 
00:26:54.473 [2024-07-26 14:08:21.714844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.714860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-07-26 14:08:21.715392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.715406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-07-26 14:08:21.715952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.715966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-07-26 14:08:21.716437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.716451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-07-26 14:08:21.716858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.716872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-07-26 14:08:21.717346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.717361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-07-26 14:08:21.717917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.717931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-07-26 14:08:21.718512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.718526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3117089 Killed "${NVMF_APP[@]}" "$@" 00:26:54.473 [2024-07-26 14:08:21.719053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.719068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 
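The repeated "errno = 111" entries above are Linux ECONNREFUSED results: once target_disconnect.sh kills the nvmf target application ("Killed ${NVMF_APP[@]}"), every TCP connect() from the host initiator to 10.0.0.2:4420 is refused until a listener comes back, which is why nvme_tcp_qpair_connect_sock keeps reporting the same qpair failure. A minimal probe loop along these lines (an illustrative sketch, not part of the test suite) shows the same refuse-then-recover pattern from the shell:

    # Poll the NVMe/TCP listener; nc exits non-zero ("Connection refused",
    # i.e. errno 111) while the target is down, and 0 once it listens again.
    until nc -z -w 1 10.0.0.2 4420; do
        echo "no listener on 10.0.0.2:4420 yet, retrying..."
        sleep 0.5
    done
    echo "listener on 10.0.0.2:4420 is back"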
00:26:54.473 14:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:26:54.473 [2024-07-26 14:08:21.719522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.719536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 14:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:54.473 14:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:54.473 [2024-07-26 14:08:21.720052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.720066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 14:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:54.473 [2024-07-26 14:08:21.720591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.720607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 14:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-07-26 14:08:21.721096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.721110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-07-26 14:08:21.721565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-07-26 14:08:21.721578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.722111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.722126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.722678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.722691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.723218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.723232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 
00:26:54.474 [2024-07-26 14:08:21.723638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.723651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.724105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.724120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.724654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.724668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.725173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.725188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.725638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.725653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.726179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.726192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.726678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.726691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.727284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.727305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 14:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3117823 00:26:54.474 [2024-07-26 14:08:21.727837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.727853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 
00:26:54.474 14:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3117823 00:26:54.474 14:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:54.474 [2024-07-26 14:08:21.728379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.728394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 14:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3117823 ']' 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 14:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.474 [2024-07-26 14:08:21.728868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.728883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 14:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:54.474 14:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.474 14:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:54.474 14:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:54.474 [2024-07-26 14:08:21.731175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.731206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.731802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.731818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.732391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.732406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.732899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.732914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 
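At this point the test restarts the target: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and then blocks in waitforlisten until PID 3117823 is up and its RPC socket /var/tmp/spdk.sock accepts commands (the autotest_common.sh trace lines above are from that wait). A rough shell equivalent of the wait, shown only as a sketch rather than the actual waitforlisten implementation, could be:

    pid=3117823                  # PID printed by nvmfappstart above
    rpc_sock=/var/tmp/spdk.sock  # RPC listen address named in the log
    # Wait for the RPC socket to appear, but bail out if the process died.
    while ! [ -S "$rpc_sock" ]; do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt (pid $pid) exited early"; exit 1; }
        sleep 0.1
    done
    echo "nvmf_tgt is listening on $rpc_sock"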
00:26:54.474 [2024-07-26 14:08:21.733371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.733387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.733804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.733817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.734293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.734307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.734717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.734731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.735246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.735260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.735814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.735827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.736356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.736370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.736802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.736815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.737320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.737334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.737828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.737843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 
00:26:54.474 [2024-07-26 14:08:21.738374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.738388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.738948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.738961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.739438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.739452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.739948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.739961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-07-26 14:08:21.740510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-07-26 14:08:21.740526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.740931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.740944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.741520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.741535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.741940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.741953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.742425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.742439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.742895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.742908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 
00:26:54.475 [2024-07-26 14:08:21.743422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.743437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.743876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.743891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.744356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.744370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.744828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.744841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.745302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.745317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.745772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.745785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.746314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.746329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.746756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.746769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.747269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.747283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.747692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.747706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 
00:26:54.475 [2024-07-26 14:08:21.748035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.748054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.748473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.748487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.748999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.749013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.749548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.749562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.749973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.749986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.750375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.750389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.750881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.750895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.751361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.751375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.751899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.751913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.752392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.752405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 
00:26:54.475 [2024-07-26 14:08:21.752927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.752940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.753330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.753345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.753748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.753762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.754200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.754215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.754689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.754703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.755107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-07-26 14:08:21.755122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-07-26 14:08:21.755482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.755496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.755954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.755968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.756390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.756407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.756885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.756899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 
00:26:54.476 [2024-07-26 14:08:21.757345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.757359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.757766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.757779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.758236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.758250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.758712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.758726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.759176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.759193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.759579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.759592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.760049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.760064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.760527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.760540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.760819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.760832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.761237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.761251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 
00:26:54.476 [2024-07-26 14:08:21.761742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.761757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.762279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.762293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.762799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.762813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.763340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.763354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.763741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.763756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.764174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.764189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.764718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.764732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.765133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.765146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.765613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.765627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.766155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.766169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 
00:26:54.476 [2024-07-26 14:08:21.766641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.766654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.766815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.766828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.767283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.767297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.767828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.767841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.768365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.768379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.768886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.768899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.769425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.769439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.769946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.769960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.770450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.770464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.770922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.770935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 
00:26:54.476 [2024-07-26 14:08:21.771374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-07-26 14:08:21.771388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-07-26 14:08:21.771768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.771781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.772179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.772192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.772674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.772687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.773148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.773162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.773622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.773635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.774141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.774155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.774663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.774677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.774800] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:26:54.477 [2024-07-26 14:08:21.774847] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.477 [2024-07-26 14:08:21.775119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.775134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 
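The banner above shows the replacement nvmf_tgt (SPDK v24.09-pre, DPDK 24.03.0) beginning EAL initialization with the 0xF0 core mask passed by nvmfappstart, while the initiator side is still being refused. Purely as a hypothetical debugging step (not something this script runs), one could confirm that the restarted target has re-created its TCP listener on port 4420 inside the namespace:

    # List listening TCP sockets inside the target's network namespace
    # and look for the NVMe/TCP port used by this test (4420).
    ip netns exec cvl_0_0_ns_spdk ss -ltn | grep ':4420' \
        && echo "listener present" || echo "listener not (yet) present"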
00:26:54.477 [2024-07-26 14:08:21.775607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.775620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.776092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.776106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.776414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.776427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.776932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.776946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.777459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.777474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.777939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.777953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.778429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.778443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.778900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.778914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.779469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.779482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.780011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.780024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 
00:26:54.477 [2024-07-26 14:08:21.780480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.780494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.781024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.781038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.781528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.781541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.782016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.782029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.782511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.782526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.782780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.782793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.783179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.783193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.783718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.783734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.784126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.784140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.784648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.784662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 
00:26:54.477 [2024-07-26 14:08:21.785116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.785130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-07-26 14:08:21.785670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-07-26 14:08:21.785683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.786150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.786164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.786707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.786720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.787116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.787130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.787453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.787467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.787917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.787930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.788370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.788384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.788911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.788925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.789457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.789472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 
00:26:54.478 [2024-07-26 14:08:21.789871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.789885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.790150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.790164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.790673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.790686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.791127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.791141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.791671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.791684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.792198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.792212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.792746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.792759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.793237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.793251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.793716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.793729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.794188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.794202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 
00:26:54.478 [2024-07-26 14:08:21.794651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.794664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.795167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.795181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.795712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.795725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.796178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.796192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.796738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.796751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.797213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.797227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.797731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.797743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.798252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.798267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.798583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.798597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-07-26 14:08:21.799035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.799061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 
00:26:54.478 [2024-07-26 14:08:21.799455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-07-26 14:08:21.799468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.799975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.799989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.800468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.800482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.800949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.800962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.801487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.801501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.801913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.801926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.802380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.802393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.802852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.802867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.479 [2024-07-26 14:08:21.803397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.803412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.803804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.803817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 
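The interleaved "EAL: No free 2048 kB hugepages reported on node 1" notice above comes from DPDK's environment abstraction layer and only states that NUMA node 1 had no free 2 MB hugepages at initialization time. A quick way to see what the kernel actually has free is to read the standard sysfs counter, as in the illustrative sketch below; the path shown is the stock Linux location (per-node counts live under /sys/devices/system/node/node<N>/hugepages/) and is not anything SPDK-specific.

/*
 * Illustrative sketch only: print the kernel's free 2 MB hugepage count,
 * i.e. the resource the EAL notice in this log is reporting on.
 */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages";
    FILE *f = fopen(path, "r");
    long free_pages = -1;

    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    if (fscanf(f, "%ld", &free_pages) != 1) {
        fprintf(stderr, "could not parse %s\n", path);
        fclose(f);
        return 1;
    }
    fclose(f);

    printf("free 2048 kB hugepages (all nodes): %ld\n", free_pages);
    return 0;
}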
00:26:54.479 [2024-07-26 14:08:21.804352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.804366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.804822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.804836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.805354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.805370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.805844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.805858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.806392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.806406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.806914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.806927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.807457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.807470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.807864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.807878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.808387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.808401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.808929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.808942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 
00:26:54.479 [2024-07-26 14:08:21.809334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.809348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.809860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.809874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.810409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.810423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.810876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.810890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.811341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.811354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.811824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.811837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.812360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.812374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.812888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.812901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.813338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.813352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.813908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.813921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 
00:26:54.479 [2024-07-26 14:08:21.814121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.814135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.814525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.814538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.815072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.815086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.815542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.815555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.815934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.815947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.816410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.816424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.816949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.816963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-07-26 14:08:21.817482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-07-26 14:08:21.817496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.818030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.818049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.818502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.818516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 
00:26:54.480 [2024-07-26 14:08:21.818914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.818927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.819378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.819392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.819798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.819812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.820289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.820303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.820810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.820823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.821200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.821214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.821743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.821756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.822202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.822220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.822753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.822767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.823294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.823308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 
00:26:54.480 [2024-07-26 14:08:21.823787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.823801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.824310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.824324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.824853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.824866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.825257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.825271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.825780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.825793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.826233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.826247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.826636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.826650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.827112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.827126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.827577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.827591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.828096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.828110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 
00:26:54.480 [2024-07-26 14:08:21.828582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.828595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.829128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.829142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.829595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.829608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.830135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.830149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.830607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.830621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.831073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.831087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.831616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.831630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.832024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.832038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.832451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.832465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.832973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.832987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 
00:26:54.480 [2024-07-26 14:08:21.833497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.833512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.834028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.834041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.834386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.834399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.834857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-07-26 14:08:21.834870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-07-26 14:08:21.835399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.835413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.835795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.835808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.836313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.836328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.836855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.836869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.837326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.837340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.837869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.837883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 
00:26:54.481 [2024-07-26 14:08:21.838347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.838361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.838876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.838890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.839419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.839434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.839892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.839906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.840428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.840443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.840898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.840912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.841382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.841396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.841407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:54.481 [2024-07-26 14:08:21.841926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.841941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.842418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.842433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 
00:26:54.481 [2024-07-26 14:08:21.842967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.842981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.843487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.843501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.843976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.843990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.844520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.844534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.845063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.845078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.845612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.845628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.846071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.846085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.846617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.846631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.847166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.847181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.847726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.847740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 
00:26:54.481 [2024-07-26 14:08:21.847969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.847982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.848486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.848505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.848969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.848984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.849388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.849403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.849908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.849923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.850394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.850410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.850940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.850956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.851483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.851500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.852071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.852088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.852489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.852505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 
00:26:54.481 [2024-07-26 14:08:21.852960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.852976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-07-26 14:08:21.853420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-07-26 14:08:21.853434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.853818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.853832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.854277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.854291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.854824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.854839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.855287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.855302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.855769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.855783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.856315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.856329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.856797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.856812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.857320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.857335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 
00:26:54.482 [2024-07-26 14:08:21.857789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.857803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.858336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.858351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.858870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.858885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.859417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.859431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.859889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.859903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.860443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.860458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.860983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.860997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.861441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.861456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.861993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.862007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.862416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.862432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 
00:26:54.482 [2024-07-26 14:08:21.862900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.862914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.863366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.863381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.863907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.863921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.864427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.864442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.864976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.864990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.865379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.865393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.865835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.865849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.866374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.866388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.866896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.866910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.867438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.867453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 
00:26:54.482 [2024-07-26 14:08:21.867927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.867941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.868415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.868431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.868875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-07-26 14:08:21.868898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-07-26 14:08:21.869340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.869355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.869797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.869811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.870316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.870331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.870884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.870898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.871423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.871437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.871981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.871995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.872565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.872579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 
00:26:54.483 [2024-07-26 14:08:21.873104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.873119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.873626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.873640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.873867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.873881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.874402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.874416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.874886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.874900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.875378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.875393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.875847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.875860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.876017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.876030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.876259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.876274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.876801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.876815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 
00:26:54.483 [2024-07-26 14:08:21.877213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.877229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.877687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.877701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.878180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.878196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.878726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.878739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.879183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.879197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.879731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.879744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.880169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.880183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.880652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.880666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.881153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.881193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.881723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.881739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 
00:26:54.483 [2024-07-26 14:08:21.882302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.882319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.882874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.882888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.883348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.883362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.883895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.883909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.884369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.884385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.884899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.884913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.885445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.885461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.885917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.885931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.886434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.886449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 00:26:54.483 [2024-07-26 14:08:21.886839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.886852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.483 qpair failed and we were unable to recover it. 
00:26:54.483 [2024-07-26 14:08:21.887358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.483 [2024-07-26 14:08:21.887372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.484 qpair failed and we were unable to recover it. 00:26:54.484 [2024-07-26 14:08:21.887881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.484 [2024-07-26 14:08:21.887895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.484 qpair failed and we were unable to recover it. 00:26:54.484 [2024-07-26 14:08:21.888362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.484 [2024-07-26 14:08:21.888377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.484 qpair failed and we were unable to recover it. 00:26:54.484 [2024-07-26 14:08:21.888931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.484 [2024-07-26 14:08:21.888945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.484 qpair failed and we were unable to recover it. 00:26:54.484 [2024-07-26 14:08:21.889395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.484 [2024-07-26 14:08:21.889410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.484 qpair failed and we were unable to recover it. 00:26:54.484 [2024-07-26 14:08:21.889945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.484 [2024-07-26 14:08:21.889961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.484 qpair failed and we were unable to recover it. 00:26:54.484 [2024-07-26 14:08:21.890426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.484 [2024-07-26 14:08:21.890443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.484 qpair failed and we were unable to recover it. 00:26:54.484 [2024-07-26 14:08:21.890903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.484 [2024-07-26 14:08:21.890919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.484 qpair failed and we were unable to recover it. 00:26:54.484 [2024-07-26 14:08:21.891435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.484 [2024-07-26 14:08:21.891451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.484 qpair failed and we were unable to recover it. 00:26:54.484 [2024-07-26 14:08:21.891960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.484 [2024-07-26 14:08:21.891976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.484 qpair failed and we were unable to recover it. 
00:26:54.484 [2024-07-26 14:08:21.892485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.484 [2024-07-26 14:08:21.892501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.484 qpair failed and we were unable to recover it. 00:26:54.484 [2024-07-26 14:08:21.892959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.484 [2024-07-26 14:08:21.892974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.484 qpair failed and we were unable to recover it. 00:26:54.484 [2024-07-26 14:08:21.893452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.484 [2024-07-26 14:08:21.893468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.484 qpair failed and we were unable to recover it. 00:26:54.484 [2024-07-26 14:08:21.893934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.484 [2024-07-26 14:08:21.893950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.484 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.894453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.894470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.894883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.894903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.895348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.895364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.895875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.895891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.896301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.896318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.896795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.896810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 
00:26:54.752 [2024-07-26 14:08:21.897307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.897321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.897802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.897817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.898208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.898224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.898695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.898710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.899214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.899230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.899754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.899769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.900322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.900338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.900891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.900904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.901382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.901396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.901910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.901923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 
00:26:54.752 [2024-07-26 14:08:21.902374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.902390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.902934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.902948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.903464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.903478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.903983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.903999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.904536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.904550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.905055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.905069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.905468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.905480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.906009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.906022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.906444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.906458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.907007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.907020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 
00:26:54.752 [2024-07-26 14:08:21.907564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.907577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.908052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-07-26 14:08:21.908066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-07-26 14:08:21.908573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.908589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.908988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.909001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.909452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.909467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.909997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.910011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.910522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.910536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.910999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.911012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.911472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.911486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.911958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.911972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 
00:26:54.753 [2024-07-26 14:08:21.912504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.912519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.913065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.913079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.913590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.913603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.913872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.913885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.914391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.914405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.914868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.914881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.915388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.915402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.915849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.915862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.916338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.916353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.916857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.916871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 
00:26:54.753 [2024-07-26 14:08:21.917387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.917401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.917849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.917863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.918340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.918355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.918829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.918843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.919247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.919261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.919712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.919725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.920183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.920197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.920726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.920739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.921130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.921145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.921620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.921636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 
00:26:54.753 [2024-07-26 14:08:21.921835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.921849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.922303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.922316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.922823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.922836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.923349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.923363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.923886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.923900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.924382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.924396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.924934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.924947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.925498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.925512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.926062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.926076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-07-26 14:08:21.926575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-07-26 14:08:21.926588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 
00:26:54.753 [2024-07-26 14:08:21.927161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.754 [2024-07-26 14:08:21.927174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.754 qpair failed and we were unable to recover it. 00:26:54.754 [2024-07-26 14:08:21.927639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.754 [2024-07-26 14:08:21.927652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.754 qpair failed and we were unable to recover it. 00:26:54.754 [2024-07-26 14:08:21.928131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.754 [2024-07-26 14:08:21.928145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.754 qpair failed and we were unable to recover it. 00:26:54.754 [2024-07-26 14:08:21.928653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.754 [2024-07-26 14:08:21.928667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.754 qpair failed and we were unable to recover it. 00:26:54.754 [2024-07-26 14:08:21.929235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.754 [2024-07-26 14:08:21.929249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.754 qpair failed and we were unable to recover it. 00:26:54.754 [2024-07-26 14:08:21.929760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.754 [2024-07-26 14:08:21.929774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.754 qpair failed and we were unable to recover it. 00:26:54.754 [2024-07-26 14:08:21.930317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.754 [2024-07-26 14:08:21.930332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.754 qpair failed and we were unable to recover it. 00:26:54.754 [2024-07-26 14:08:21.930855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.754 [2024-07-26 14:08:21.930868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.754 qpair failed and we were unable to recover it. 00:26:54.754 [2024-07-26 14:08:21.931380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.754 [2024-07-26 14:08:21.931394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.754 qpair failed and we were unable to recover it. 00:26:54.754 [2024-07-26 14:08:21.931946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.754 [2024-07-26 14:08:21.931960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.754 qpair failed and we were unable to recover it. 
00:26:54.754 [2024-07-26 14:08:21.932533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.754 [2024-07-26 14:08:21.932548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.754 qpair failed and we were unable to recover it. 00:26:54.754 [2024-07-26 14:08:21.933115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.754 [2024-07-26 14:08:21.933129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.754 qpair failed and we were unable to recover it. 00:26:54.754 [2024-07-26 14:08:21.933611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.754 [2024-07-26 14:08:21.933625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.754 qpair failed and we were unable to recover it. 00:26:54.754 [2024-07-26 14:08:21.934133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.754 [2024-07-26 14:08:21.934147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.754 qpair failed and we were unable to recover it. 00:26:54.754 [2024-07-26 14:08:21.934721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.754 [2024-07-26 14:08:21.934735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.754 qpair failed and we were unable to recover it. 00:26:54.754 [2024-07-26 14:08:21.935187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.754 [2024-07-26 14:08:21.935201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.754 qpair failed and we were unable to recover it. 00:26:54.754 [2024-07-26 14:08:21.935763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.754 [2024-07-26 14:08:21.935777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.754 qpair failed and we were unable to recover it. 00:26:54.754 [2024-07-26 14:08:21.936337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.754 [2024-07-26 14:08:21.936352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.754 qpair failed and we were unable to recover it. 00:26:54.754 [2024-07-26 14:08:21.936904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.754 [2024-07-26 14:08:21.936918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.754 qpair failed and we were unable to recover it. 00:26:54.754 [2024-07-26 14:08:21.937292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.754 [2024-07-26 14:08:21.937307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.754 qpair failed and we were unable to recover it. 
00:26:54.754 [2024-07-26 14:08:21.937838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.754 [2024-07-26 14:08:21.937851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420
00:26:54.754 qpair failed and we were unable to recover it.
00:26:54.754 [2024-07-26 14:08:21.938324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.754 [2024-07-26 14:08:21.938342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420
00:26:54.754 qpair failed and we were unable to recover it.
00:26:54.754 [2024-07-26 14:08:21.938879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.754 [2024-07-26 14:08:21.938893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420
00:26:54.754 qpair failed and we were unable to recover it.
00:26:54.754 [2024-07-26 14:08:21.939465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.754 [2024-07-26 14:08:21.939480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420
00:26:54.754 qpair failed and we were unable to recover it.
00:26:54.754 [2024-07-26 14:08:21.939982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.754 [2024-07-26 14:08:21.939996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420
00:26:54.754 qpair failed and we were unable to recover it.
00:26:54.754 [2024-07-26 14:08:21.940482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.754 [2024-07-26 14:08:21.940497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420
00:26:54.754 qpair failed and we were unable to recover it.
00:26:54.754 [2024-07-26 14:08:21.940797] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:54.754 [2024-07-26 14:08:21.940834] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:54.754 [2024-07-26 14:08:21.940845] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:54.754 [2024-07-26 14:08:21.940855] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:54.754 [2024-07-26 14:08:21.940862] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:54.754 [2024-07-26 14:08:21.941032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.754 [2024-07-26 14:08:21.941052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420
00:26:54.754 qpair failed and we were unable to recover it.
00:26:54.754 [2024-07-26 14:08:21.940988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:26:54.754 [2024-07-26 14:08:21.941095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:26:54.754 [2024-07-26 14:08:21.941203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:26:54.754 [2024-07-26 14:08:21.941203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:26:54.754 [2024-07-26 14:08:21.941543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.754 [2024-07-26 14:08:21.941557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420
00:26:54.754 qpair failed and we were unable to recover it.
00:26:54.754 [2024-07-26 14:08:21.942019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.754 [2024-07-26 14:08:21.942033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420
00:26:54.754 qpair failed and we were unable to recover it.
00:26:54.754 [2024-07-26 14:08:21.942588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.754 [2024-07-26 14:08:21.942602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420
00:26:54.754 qpair failed and we were unable to recover it.
00:26:54.754 [2024-07-26 14:08:21.943155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.754 [2024-07-26 14:08:21.943170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420
00:26:54.754 qpair failed and we were unable to recover it.
00:26:54.754 [2024-07-26 14:08:21.943695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.754 [2024-07-26 14:08:21.943709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420
00:26:54.754 qpair failed and we were unable to recover it.
00:26:54.754 [2024-07-26 14:08:21.944112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.754 [2024-07-26 14:08:21.944126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420
00:26:54.754 qpair failed and we were unable to recover it.
00:26:54.754 [2024-07-26 14:08:21.944579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.754 [2024-07-26 14:08:21.944593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420
00:26:54.754 qpair failed and we were unable to recover it.
00:26:54.754 [2024-07-26 14:08:21.945070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.754 [2024-07-26 14:08:21.945084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420
00:26:54.754 qpair failed and we were unable to recover it.
00:26:54.755 [2024-07-26 14:08:21.945544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.755 [2024-07-26 14:08:21.945557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420
00:26:54.755 qpair failed and we were unable to recover it.
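The reactor.c notices above mark the point where the nvmf target application has finished its setup: the SPDK event framework starts one reactor, a dedicated polling thread, on each core of the configured core mask (cores 4 through 7 here), and the app_setup_trace notices just before them record that a trace buffer is available at /dev/shm/nvmf_trace.0 and can be inspected with the quoted spdk_trace command. As a rough illustration of the mechanism such per-core threads rely on, and not SPDK's reactor implementation, the sketch below pins the calling thread to one core with the Linux pthread_setaffinity_np call:

/* Generic illustration (not SPDK code): pin the calling thread to a single
 * CPU core, the affinity primitive a per-core reactor/poller builds on.
 * Build with: cc -pthread pin_core.c   (hypothetical file name) */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static int pin_to_core(int core)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(core, &set);
    /* Returns 0 on success, an error number on failure. */
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main(void)
{
    int core = 5;   /* one of the cores named in the reactor notices above */

    if (pin_to_core(core) != 0) {
        fprintf(stderr, "failed to pin to core %d\n", core);
        return 1;
    }
    printf("running pinned to core %d\n", core);
    return 0;
}

In SPDK itself the set of cores comes from the application's core mask option (commonly -m) and the framework performs this pinning internally when it launches one reactor per core; the sketch only shows the underlying affinity call.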
00:26:54.755 [2024-07-26 14:08:21.946117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.946131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.946712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.946726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.947284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.947298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.947833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.947847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.948403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.948420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.948973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.948987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.949500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.949515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.950060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.950074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.950598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.950613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.951191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.951205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 
00:26:54.755 [2024-07-26 14:08:21.951741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.951757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.952240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.952254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.952760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.952773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.953270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.953285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.953837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.953852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.954429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.954444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.954981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.954996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.955547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.955562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.956049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.956065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.956570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.956585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 
00:26:54.755 [2024-07-26 14:08:21.957154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.957170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.957641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.957656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.958157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.958173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.958749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.958765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.959252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.959268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.959751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.959767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.960322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.960339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.960920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.960935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.961502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.961517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.962062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.962077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 
00:26:54.755 [2024-07-26 14:08:21.962587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.962602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.963088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.963103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.963653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.963667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.964241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.964255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.964813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.964828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.965345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.965360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-07-26 14:08:21.965881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-07-26 14:08:21.965896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.966474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.966489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.967023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.967037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.967612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.967626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 
00:26:54.756 [2024-07-26 14:08:21.968083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.968098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.968621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.968635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.969180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.969194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.969708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.969722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.970305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.970319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.970801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.970816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.971345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.971360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.971845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.971859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.972390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.972405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.972913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.972927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 
00:26:54.756 [2024-07-26 14:08:21.973448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.973464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.974041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.974065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.974631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.974646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.975170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.975185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.975732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.975747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.976281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.976298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.976876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.976893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.977467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.977483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.978011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.978026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.978564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.978578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 
00:26:54.756 [2024-07-26 14:08:21.979135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.979150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.979671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.979684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.980140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.980155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.980683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.980697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.981364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.981378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.981926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.981940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.982415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.982430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-07-26 14:08:21.982942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-07-26 14:08:21.982956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.983433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.983447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.983914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.983927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 
00:26:54.757 [2024-07-26 14:08:21.984451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.984465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.984992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.985006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.985412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.985429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.986200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.986215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.986686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.986700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.987161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.987175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.987650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.987665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.988218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.988232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.988823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.988858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.989431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.989454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 
00:26:54.757 [2024-07-26 14:08:21.990001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.990025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.990572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.990586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.991046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.991061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.991634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.991648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.992186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.992200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.992654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.992667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.993178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.993193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.993738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.993751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.994200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.994214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.994750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.994763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 
00:26:54.757 [2024-07-26 14:08:21.995328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.995342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.995829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.995843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.996368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.996383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.996954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.996968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.997531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.997545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.998119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.998134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.998687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.998700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.999249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.999262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:21.999795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:21.999809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:22.000367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:22.000383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 
00:26:54.757 [2024-07-26 14:08:22.000851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:22.000865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:22.001382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:22.001397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:22.002144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:22.002160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:22.002717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:22.002731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:22.003275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-07-26 14:08:22.003289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-07-26 14:08:22.003794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.003808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.004325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.004340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.004803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.004817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.005371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.005387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.005857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.005870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 
00:26:54.758 [2024-07-26 14:08:22.006412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.006427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.006957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.006970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.007497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.007511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.008088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.008102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.008664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.008678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.009130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.009144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.009652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.009666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.010235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.010248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.010770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.010784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.011246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.011260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 
00:26:54.758 [2024-07-26 14:08:22.011809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.011822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.012375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.012389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.012933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.012947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.013392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.013405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.013869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.013882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.014453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.014467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.015020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.015034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.015592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.015607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.016117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.016132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.016591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.016604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 
00:26:54.758 [2024-07-26 14:08:22.017124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.017139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.017716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.017729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.018235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.018249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.018736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.018750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.019306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.019320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.019890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.019903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.020465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.020478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.021025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.021039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.021582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.021596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.022070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.022084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 
00:26:54.758 [2024-07-26 14:08:22.022589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.022632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.023110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.023129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.023667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-07-26 14:08:22.023681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-07-26 14:08:22.024163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.024178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.024699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.024713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.025302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.025316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.025782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.025796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.026319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.026334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.026878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.026892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.027445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.027459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 
00:26:54.759 [2024-07-26 14:08:22.027991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.028005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.028533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.028547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.029016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.029030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.029574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.029593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.030146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.030160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.030694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.030707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.031231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.031245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.031823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.031836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.032376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.032391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.032838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.032851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 
00:26:54.759 [2024-07-26 14:08:22.033394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.033408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.033983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.033997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.034583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.034598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.035356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.035371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.035919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.035932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.036415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.036429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.036933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.036947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.037473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.037487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.037928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.037941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.038472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.038487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 
00:26:54.759 [2024-07-26 14:08:22.039021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.039035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.039505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.039519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.040060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.040075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.040634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.040647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.041218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.041232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.041638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.041652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.042407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.042422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.043185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.043201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.043729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.043743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-07-26 14:08:22.044251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.044265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 
00:26:54.759 [2024-07-26 14:08:22.044735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-07-26 14:08:22.044751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.045272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.045288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.045823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.045836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.046360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.046374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.046854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.046867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.047402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.047417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.047970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.047984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.048494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.048509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.048994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.049008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.049534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.049548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 
00:26:54.760 [2024-07-26 14:08:22.050105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.050120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.050672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.050685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.051262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.051276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.051832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.051846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.052305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.052319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.052848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.052861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.053316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.053331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.053784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.053797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.054306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.054321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.054780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.054793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 
00:26:54.760 [2024-07-26 14:08:22.055299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.055314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.055866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.055879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.056440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.056454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.056954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.056967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.057490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.057505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.058014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.058028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.058550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.058564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.059021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.059037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.059558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.059571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.060104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.060118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 
00:26:54.760 [2024-07-26 14:08:22.060641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.060655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.061247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.061261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.061835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.061849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.062404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.062418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.062992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.063006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.063568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.063582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.064086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.064101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.064629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-07-26 14:08:22.064642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-07-26 14:08:22.065123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.065138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.065668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.065681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 
00:26:54.761 [2024-07-26 14:08:22.066144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.066157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.066693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.066706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.067261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.067275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.067798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.067811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.068315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.068328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.068787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.068800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.069325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.069339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.069872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.069885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.070404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.070418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.070991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.071004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 
00:26:54.761 [2024-07-26 14:08:22.071511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.071526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.072006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.072019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.072622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.072636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.073095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.073109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.073665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.073682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.074155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.074168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.074696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.074709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.075281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.075295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.075838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.075852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.076372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.076385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 
00:26:54.761 [2024-07-26 14:08:22.076958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.076972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.077504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.077517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.078074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.078088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.078640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.078654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.079181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.079194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.079753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.079767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.080324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.080339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.080858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.080871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.081457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.081472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.761 [2024-07-26 14:08:22.082002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.082016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 
00:26:54.761 [2024-07-26 14:08:22.082547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-07-26 14:08:22.082561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.761 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.083112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.083126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.083604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.083618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.084153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.084167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.084708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.084722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.085203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.085217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.085789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.085802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.086336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.086350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.086873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.086887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.087463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.087477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 
00:26:54.762 [2024-07-26 14:08:22.088000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.088013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.088534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.088551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.089098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.089113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.089655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.089668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.090247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.090261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.090788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.090801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.091329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.091343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.091797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.091811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.092351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.092365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.092843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.092856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 
00:26:54.762 [2024-07-26 14:08:22.093393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.093407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.093965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.093979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.094502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.094516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.095041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.095061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.095621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.095635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.096192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.096206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.096615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.096629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.097134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.097148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.097657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.097671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.098128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.098142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 
00:26:54.762 [2024-07-26 14:08:22.098667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.098680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.099201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.099214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.099748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.099762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.100317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.100331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.100814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.100828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.101374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.101389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.101887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.101901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-07-26 14:08:22.102408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-07-26 14:08:22.102422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.102968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.102982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.103514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.103528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 
00:26:54.763 [2024-07-26 14:08:22.104021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.104035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.104496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.104509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.104963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.104976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.105511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.105525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.106077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.106091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.106596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.106612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.107148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.107163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.107614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.107628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.108075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.108091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.108566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.108579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 
00:26:54.763 [2024-07-26 14:08:22.109032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.109049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.109617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.109630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.110166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.110184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.110723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.110736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.111190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.111204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.111730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.111744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.112250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.112264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.112786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.112800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.113313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.113326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.113783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.113797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 
00:26:54.763 [2024-07-26 14:08:22.114333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.114348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.114821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.114834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.115328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.115341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.115803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.115816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.116324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.116338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.116852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.116865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.117388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.117402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.117839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.117853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.118391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.118406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.118865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.118879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 
00:26:54.763 [2024-07-26 14:08:22.119328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.119342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.119836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.119849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.120292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.120306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.120765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.120778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.121217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.121231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-07-26 14:08:22.121671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-07-26 14:08:22.121684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.122153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.122167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.122623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.122637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.123139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.123153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.123629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.123645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 
00:26:54.764 [2024-07-26 14:08:22.124157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.124171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.124623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.124637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.125182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.125196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.125702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.125715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.126240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.126254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.126765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.126779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.127303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.127317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.127840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.127854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.128371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.128386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.129120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.129136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 
00:26:54.764 [2024-07-26 14:08:22.129665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.129679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.130232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.130247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.130696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.130710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.131231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.131245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.131733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.131748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.132307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.132321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.132851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.132865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.133392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.133405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.133862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.133875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-07-26 14:08:22.134334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-07-26 14:08:22.134348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 
00:26:54.764 [2024-07-26 14:08:22.134839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.764 [2024-07-26 14:08:22.134853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420
00:26:54.764 qpair failed and we were unable to recover it.
00:26:54.764 [2024-07-26 14:08:22.135409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.764 [2024-07-26 14:08:22.135424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420
00:26:54.764 qpair failed and we were unable to recover it.
[... the same three-line pattern (connect() failed, errno = 111 / sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 14:08:22.135963 through 14:08:22.207637 ...]
00:26:55.038 [2024-07-26 14:08:22.208133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.038 [2024-07-26 14:08:22.208167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420
00:26:55.038 qpair failed and we were unable to recover it.
[... the same pattern repeats for tqpair=0x7fb7d0000b90 with addr=10.0.0.2, port=4420 from 14:08:22.208705 through 14:08:22.229058 ...]
00:26:55.039 [2024-07-26 14:08:22.229541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.039 [2024-07-26 14:08:22.229559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420
00:26:55.039 qpair failed and we were unable to recover it.
[... the same pattern repeats for tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 from 14:08:22.229956 through 14:08:22.242365 ...]
00:26:55.040 [2024-07-26 14:08:22.242792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.040 [2024-07-26 14:08:22.242809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420
00:26:55.040 qpair failed and we were unable to recover it.
00:26:55.040 [2024-07-26 14:08:22.243285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-26 14:08:22.243299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.243720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.243734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.244264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.244280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.244736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.244750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.245279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.245293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.245693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.245706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.246162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.246177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.246637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.246650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.247217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.247232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.247749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.247766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 
00:26:55.041 [2024-07-26 14:08:22.248249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.248277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.248756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.248776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.249297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.249312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.249778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.249792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.250353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.250368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.250768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.250781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.251281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.251296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.251708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.251722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.252255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.252269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.252678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.252691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 
00:26:55.041 [2024-07-26 14:08:22.253244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.253259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.253814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.253828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.254335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.254350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.254756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.254770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.255254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.255270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.255730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.041 [2024-07-26 14:08:22.255744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.041 qpair failed and we were unable to recover it. 00:26:55.041 [2024-07-26 14:08:22.256265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.256283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.042 qpair failed and we were unable to recover it. 00:26:55.042 [2024-07-26 14:08:22.256712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.256725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.042 qpair failed and we were unable to recover it. 00:26:55.042 [2024-07-26 14:08:22.257428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.257443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.042 qpair failed and we were unable to recover it. 00:26:55.042 [2024-07-26 14:08:22.257851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.257865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.042 qpair failed and we were unable to recover it. 
00:26:55.042 [2024-07-26 14:08:22.258549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.258564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.042 qpair failed and we were unable to recover it. 00:26:55.042 [2024-07-26 14:08:22.259057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.259073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.042 qpair failed and we were unable to recover it. 00:26:55.042 [2024-07-26 14:08:22.259591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.259606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.042 qpair failed and we were unable to recover it. 00:26:55.042 [2024-07-26 14:08:22.260022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.260035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.042 qpair failed and we were unable to recover it. 00:26:55.042 [2024-07-26 14:08:22.260511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.260526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.042 qpair failed and we were unable to recover it. 00:26:55.042 [2024-07-26 14:08:22.261102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.261117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.042 qpair failed and we were unable to recover it. 00:26:55.042 [2024-07-26 14:08:22.261524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.261537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.042 qpair failed and we were unable to recover it. 00:26:55.042 [2024-07-26 14:08:22.262056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.262070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.042 qpair failed and we were unable to recover it. 00:26:55.042 [2024-07-26 14:08:22.262468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.262482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.042 qpair failed and we were unable to recover it. 00:26:55.042 [2024-07-26 14:08:22.262888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.262901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.042 qpair failed and we were unable to recover it. 
00:26:55.042 [2024-07-26 14:08:22.263409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.263423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.042 qpair failed and we were unable to recover it. 00:26:55.042 [2024-07-26 14:08:22.263902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.263915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.042 qpair failed and we were unable to recover it. 00:26:55.042 [2024-07-26 14:08:22.264494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.264508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.042 qpair failed and we were unable to recover it. 00:26:55.042 [2024-07-26 14:08:22.264915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.264928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.042 qpair failed and we were unable to recover it. 00:26:55.042 [2024-07-26 14:08:22.265619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.265634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.042 qpair failed and we were unable to recover it. 00:26:55.042 [2024-07-26 14:08:22.266191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.266206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.042 qpair failed and we were unable to recover it. 00:26:55.042 [2024-07-26 14:08:22.266695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.266709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.042 qpair failed and we were unable to recover it. 00:26:55.042 [2024-07-26 14:08:22.267165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.042 [2024-07-26 14:08:22.267179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 00:26:55.043 [2024-07-26 14:08:22.267653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.267666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 00:26:55.043 [2024-07-26 14:08:22.268173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.268187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 
00:26:55.043 [2024-07-26 14:08:22.268603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.268617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 00:26:55.043 [2024-07-26 14:08:22.269187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.269201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 00:26:55.043 [2024-07-26 14:08:22.269682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.269696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 00:26:55.043 [2024-07-26 14:08:22.270105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.270124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 00:26:55.043 [2024-07-26 14:08:22.270587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.270601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 00:26:55.043 [2024-07-26 14:08:22.271311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.271325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 00:26:55.043 [2024-07-26 14:08:22.271858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.271871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 00:26:55.043 [2024-07-26 14:08:22.272419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.272433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 00:26:55.043 [2024-07-26 14:08:22.272862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.272875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 00:26:55.043 [2024-07-26 14:08:22.273372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.273386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 
00:26:55.043 [2024-07-26 14:08:22.277069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.277101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 00:26:55.043 [2024-07-26 14:08:22.277651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.277666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 00:26:55.043 [2024-07-26 14:08:22.278118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.278133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 00:26:55.043 [2024-07-26 14:08:22.278543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.278556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 00:26:55.043 [2024-07-26 14:08:22.278965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.278978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 00:26:55.043 [2024-07-26 14:08:22.279478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.279492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 00:26:55.043 [2024-07-26 14:08:22.279978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.279992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 00:26:55.043 [2024-07-26 14:08:22.280445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.280459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 00:26:55.043 [2024-07-26 14:08:22.280981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.280995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 00:26:55.043 [2024-07-26 14:08:22.281498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.281513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 
00:26:55.043 [2024-07-26 14:08:22.281984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.043 [2024-07-26 14:08:22.281998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.043 qpair failed and we were unable to recover it. 00:26:55.043 [2024-07-26 14:08:22.282465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.282479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.282933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.282947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.283490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.283504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.283998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.284012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.284494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.284508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.284907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.284921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.285368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.285382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.285843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.285857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.286331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.286345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 
00:26:55.044 [2024-07-26 14:08:22.286863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.286876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.287410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.287425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.287834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.287847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.288374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.288387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.288848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.288861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.289318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.289332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.289816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.289829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.290300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.290315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.290859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.290873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.291447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.291460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 
00:26:55.044 [2024-07-26 14:08:22.291995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.292009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.292693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.292707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.293268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.293283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.293704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.293718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f2f30 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.294269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.294296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.294747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.294759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.295335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.295345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.044 [2024-07-26 14:08:22.295741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-07-26 14:08:22.295751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.296199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.296211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.296619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.296629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 
00:26:55.045 [2024-07-26 14:08:22.297155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.297165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.297645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.297655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.298205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.298216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.298693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.298703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.299249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.299259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.299665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.299675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.300082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.300093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.300492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.300502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.300958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.300968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.301362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.301373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 
00:26:55.045 [2024-07-26 14:08:22.301883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.301893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.302321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.302331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.302761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.302771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.303205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.303216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.303605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.303614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.304064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.304075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.304516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.304526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.304977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.304988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.305451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.305460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.305905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.305914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 
00:26:55.045 [2024-07-26 14:08:22.306549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.306560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.307029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.307039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.307435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.307445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-07-26 14:08:22.307899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-07-26 14:08:22.307908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.308381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.308391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.308864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.308874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.309328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.309339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.309787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.309797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.310243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.310253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.310798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.310808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 
00:26:55.046 [2024-07-26 14:08:22.311308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.311319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.311711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.311721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.312376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.312386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.312771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.312781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.313227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.313239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.313705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.313715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.314242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.314253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.314627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.314637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.315077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.315088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.315460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.315470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 
00:26:55.046 [2024-07-26 14:08:22.315935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.315945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.316400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.316411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.316793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.316802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.317199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.317209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.317607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.317616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.318069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.318080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.318461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.318470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.318853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.318862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.319304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.319315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-07-26 14:08:22.319767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-07-26 14:08:22.319778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 
00:26:55.047 [2024-07-26 14:08:22.320166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.320176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.320623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.320633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.321003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.321013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.321465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.321475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.321882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.321892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.322347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.322357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.322825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.322835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.323355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.323367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.323742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.323752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.324205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.324215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 
00:26:55.047 [2024-07-26 14:08:22.324371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.324381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.324766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.324775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.325211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.325221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.325590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.325600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.325932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.325942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.326396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.326407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.326803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.326813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.327316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.327327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.327777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.327787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.328239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.328249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 
00:26:55.047 [2024-07-26 14:08:22.328740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.328750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.328914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.328923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.329319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.329329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.329707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.329717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.330102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.330113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-07-26 14:08:22.330481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-07-26 14:08:22.330491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.330999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.331008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.331407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.331418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.331802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.331812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.332314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.332324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 
00:26:55.048 [2024-07-26 14:08:22.332711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.332720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.333106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.333116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.333499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.333509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.333959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.333969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.334357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.334367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.334870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.334880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.335330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.335341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.335516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.335525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.336053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.336064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.336435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.336445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 
00:26:55.048 [2024-07-26 14:08:22.336992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.337001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.337387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.337398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.337783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.337793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.338255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.338265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.338710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.338720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.339163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.339174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.339547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.339556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.340013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.340023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.340479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.340489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.340872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.340882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 
00:26:55.048 [2024-07-26 14:08:22.341270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.341280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.341726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-07-26 14:08:22.341736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-07-26 14:08:22.342141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-07-26 14:08:22.342151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-07-26 14:08:22.342590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-07-26 14:08:22.342600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-07-26 14:08:22.343052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-07-26 14:08:22.343065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-07-26 14:08:22.343525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-07-26 14:08:22.343535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-07-26 14:08:22.343994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-07-26 14:08:22.344004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-07-26 14:08:22.344414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-07-26 14:08:22.344425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-07-26 14:08:22.344926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-07-26 14:08:22.344936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-07-26 14:08:22.345436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-07-26 14:08:22.345446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 
00:26:55.049 [2024-07-26 14:08:22.345836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-07-26 14:08:22.345845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-07-26 14:08:22.346291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-07-26 14:08:22.346301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-07-26 14:08:22.346805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-07-26 14:08:22.346814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-07-26 14:08:22.347199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-07-26 14:08:22.347210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-07-26 14:08:22.347639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-07-26 14:08:22.347651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-07-26 14:08:22.348277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-07-26 14:08:22.348288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-07-26 14:08:22.348730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-07-26 14:08:22.348740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-07-26 14:08:22.349135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-07-26 14:08:22.349145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-07-26 14:08:22.349579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-07-26 14:08:22.349588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-07-26 14:08:22.349974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-07-26 14:08:22.349984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 
00:26:55.049 [2024-07-26 14:08:22.350500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-07-26 14:08:22.350510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-07-26 14:08:22.350963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-07-26 14:08:22.350973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.351353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.351364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.351808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.351818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.352322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.352333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.352722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.352735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.353124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.353134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.353526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.353536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.353934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.353944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.354386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.354398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 
00:26:55.050 [2024-07-26 14:08:22.354776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.354786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.355285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.355296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.355671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.355681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.356128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.356139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.356534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.356544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.356946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.356956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.357408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.357418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.357796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.357806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.358317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.358327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.358780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.358789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 
00:26:55.050 [2024-07-26 14:08:22.359251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.359263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.359714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.359724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.360197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.360208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.360518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.360528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.360986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.360996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.361439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.361449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.361897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.361906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-07-26 14:08:22.362310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-07-26 14:08:22.362321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.362781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.362791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.363175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.363186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 
00:26:55.051 [2024-07-26 14:08:22.363683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.363693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.364157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.364167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.364566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.364576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.365028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.365037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.365546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.365558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.366059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.366070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.366521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.366530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.366932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.366942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.367414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.367424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.367800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.367810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 
00:26:55.051 [2024-07-26 14:08:22.368327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.368338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.368856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.368866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.369328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.369339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.369778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.369787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.370283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.370294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.370756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.370766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.370928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.370937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.371391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.371401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.371844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.371854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.372296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.372307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 
00:26:55.051 [2024-07-26 14:08:22.372737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.372747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.373243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.373254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.373658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.373668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-07-26 14:08:22.374052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-07-26 14:08:22.374062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.374579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.374589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.374965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.374975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.375419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.375430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.375834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.375844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.376241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.376252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.376648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.376657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 
00:26:55.052 [2024-07-26 14:08:22.377199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.377210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.377612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.377622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.378002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.378011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.378456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.378466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.378915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.378925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.379399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.379409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.379865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.379874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.380322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.380332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.380834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.380844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.381225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.381236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 
00:26:55.052 [2024-07-26 14:08:22.381754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.381764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.382068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.382078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.382462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.382472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.382869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.382879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.383381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.383393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.383779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.383788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.384169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.384188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.384572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.384582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.385092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.385101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-07-26 14:08:22.385545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-07-26 14:08:22.385555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 
00:26:55.052 [2024-07-26 14:08:22.386013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.386023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.386556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.386566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.387016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.387025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.387474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.387484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.387988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.387998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.388389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.388400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.388837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.388846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.389298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.389308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.389744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.389754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.390209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.390220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 
00:26:55.053 [2024-07-26 14:08:22.390669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.390679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.391055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.391065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.391440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.391450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.391925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.391935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.392432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.392443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.392918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.392928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.393427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.393437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.393836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.393846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.394341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.394351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.394802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.394813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 
00:26:55.053 [2024-07-26 14:08:22.395511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.395522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.395905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.395915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.396377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.396388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.396836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.396846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.397316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.397327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.397767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-07-26 14:08:22.397777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-07-26 14:08:22.398165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.398176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.398673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.398683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.399114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.399125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.399575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.399585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 
00:26:55.054 [2024-07-26 14:08:22.399960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.399970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.400406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.400418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.400899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.400910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.401437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.401447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.401891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.401903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.402373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.402383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.402823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.402833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.403229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.403240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.403627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.403636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.404078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.404090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 
00:26:55.054 [2024-07-26 14:08:22.404475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.404485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.404946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.404956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.405397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.405408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.405818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.405828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.406325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.406335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.406727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.406738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.407254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.407265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.407713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.407723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.408111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.408124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.408596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.408607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 
00:26:55.054 [2024-07-26 14:08:22.408844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.408854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.409309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.409319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-07-26 14:08:22.409775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-07-26 14:08:22.409785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.410372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.410383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.410827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.410837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.411430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.411441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.411821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.411831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.412242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.412253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.412702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.412713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.413483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.413495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 
00:26:55.055 [2024-07-26 14:08:22.413883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.413892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.414285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.414295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.414790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.414800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.415246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.415257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.415657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.415667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.416052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.416063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.416444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.416453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.416952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.416962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.417406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.417418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.417880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.417890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 
00:26:55.055 [2024-07-26 14:08:22.418390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.418401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.418906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.418916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.419478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.419489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.419940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.419950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.420355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.420369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.420815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.420825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.421226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.421237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.421670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.421680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-07-26 14:08:22.422118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-07-26 14:08:22.422129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.056 [2024-07-26 14:08:22.422518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-07-26 14:08:22.422528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 
00:26:55.056 [2024-07-26 14:08:22.422978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-07-26 14:08:22.422990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-07-26 14:08:22.423439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-07-26 14:08:22.423450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-07-26 14:08:22.423951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-07-26 14:08:22.423961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-07-26 14:08:22.424427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-07-26 14:08:22.424437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-07-26 14:08:22.424824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-07-26 14:08:22.424834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-07-26 14:08:22.425292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-07-26 14:08:22.425303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-07-26 14:08:22.425698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-07-26 14:08:22.425708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-07-26 14:08:22.426282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-07-26 14:08:22.426293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-07-26 14:08:22.426737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-07-26 14:08:22.426747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-07-26 14:08:22.427156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-07-26 14:08:22.427166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 
00:26:55.056 [2024-07-26 14:08:22.427552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-07-26 14:08:22.427562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-07-26 14:08:22.427722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-07-26 14:08:22.427732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-07-26 14:08:22.428192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-07-26 14:08:22.428202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-07-26 14:08:22.428702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-07-26 14:08:22.428712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-07-26 14:08:22.429101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-07-26 14:08:22.429113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-07-26 14:08:22.429569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-07-26 14:08:22.429579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-07-26 14:08:22.430033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-07-26 14:08:22.430049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-07-26 14:08:22.430420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-07-26 14:08:22.430430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-07-26 14:08:22.430684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-07-26 14:08:22.430694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.431232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.431242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 
00:26:55.057 [2024-07-26 14:08:22.431635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.431645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.432032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.432046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.432713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.432723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.433222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.433233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.433685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.433695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.434130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.434140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.434514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.434524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.434959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.434969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.435140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.435150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.435666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.435676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 
00:26:55.057 [2024-07-26 14:08:22.435900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.435910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.436368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.436379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.436823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.436833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.437237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.437249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.437693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.437705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.438129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.438140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.438582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.438592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.439027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.439037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.439431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.439441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.439936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.439946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 
00:26:55.057 [2024-07-26 14:08:22.440345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.440356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.440795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.440806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.441194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.441205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.441673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.441683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-07-26 14:08:22.442154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-07-26 14:08:22.442165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.442615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.442626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.443013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.443022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.443399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.443409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.443875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.443886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.444338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.444349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 
00:26:55.058 [2024-07-26 14:08:22.444796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.444806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.445278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.445289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.445723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.445732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.446116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.446127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.446624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.446634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.447015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.447025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.447465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.447475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.447915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.447926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.448303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.448313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.448751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.448760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 
00:26:55.058 [2024-07-26 14:08:22.448983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.448992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.449390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.449402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.449772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.449782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.450303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.450313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.450465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.450474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.450898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.450909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.451378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.451389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.451769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.451779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.452237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.452247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.452773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.452783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 
00:26:55.058 [2024-07-26 14:08:22.453177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.453189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.453574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-07-26 14:08:22.453583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-07-26 14:08:22.454013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.454023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-07-26 14:08:22.454412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.454422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-07-26 14:08:22.454871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.454882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-07-26 14:08:22.455385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.455396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-07-26 14:08:22.455792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.455802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-07-26 14:08:22.456196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.456206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-07-26 14:08:22.456584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.456593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-07-26 14:08:22.457037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.457053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 
00:26:55.059 [2024-07-26 14:08:22.457446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.457457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-07-26 14:08:22.457857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.457867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-07-26 14:08:22.458342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.458353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-07-26 14:08:22.458804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.458814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-07-26 14:08:22.459203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.459214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-07-26 14:08:22.459599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.459609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-07-26 14:08:22.460053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.460063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-07-26 14:08:22.460441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.460451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-07-26 14:08:22.460833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.460843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-07-26 14:08:22.461230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.461241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 
00:26:55.059 [2024-07-26 14:08:22.461678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.461687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-07-26 14:08:22.462069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.462080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-07-26 14:08:22.462452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.462462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-07-26 14:08:22.462839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-07-26 14:08:22.462849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.327 qpair failed and we were unable to recover it. 00:26:55.327 [2024-07-26 14:08:22.463233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.327 [2024-07-26 14:08:22.463245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.327 qpair failed and we were unable to recover it. 00:26:55.327 [2024-07-26 14:08:22.463711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.327 [2024-07-26 14:08:22.463721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.327 qpair failed and we were unable to recover it. 00:26:55.327 [2024-07-26 14:08:22.464117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.464128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.464539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.464550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.465015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.465025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.465431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.465442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 
00:26:55.328 [2024-07-26 14:08:22.465822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.465832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.466084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.466118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.466576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.466592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.467007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.467022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.467473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.467487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.467897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.467911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.468374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.468388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.468765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.468778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.469181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.469195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.469594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.469607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 
00:26:55.328 [2024-07-26 14:08:22.470115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.470130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.470588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.470602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.470994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.471008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.471446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.471461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.471968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.471987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.472224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.472239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.472736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.472749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.473193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.473208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.473658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.473672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.474059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.474073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 
00:26:55.328 [2024-07-26 14:08:22.474524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.474538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.474937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.474951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.475410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.475425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.475817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.475830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.476289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.476303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.476688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.476701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.477210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.477225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.477670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.477684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.478076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.478091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.478544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.478558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 
00:26:55.328 [2024-07-26 14:08:22.478945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.478958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.479400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.479414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-26 14:08:22.479809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-26 14:08:22.479822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.480265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.480279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.480786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.480800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.481232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.481246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.481765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.481779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.482029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.482046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.482496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.482511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.482901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.482915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 
00:26:55.329 [2024-07-26 14:08:22.483308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.483322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.483716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.483733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.484122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.484133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.484533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.484543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.484925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.484935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.485399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.485410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.485781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.485791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.486240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.486251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.486687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.486698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.487150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.487161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 
00:26:55.329 [2024-07-26 14:08:22.487605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.487614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.488000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.488010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.488282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.488293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.488726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.488737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.489130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.489145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.489532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.489542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.489990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.490000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.490444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.490455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.490964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.490974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.491439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.491449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 
00:26:55.329 [2024-07-26 14:08:22.491948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.491958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.492409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.492419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.492815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.492825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.493210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.493221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.493608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.493618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.494093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.494104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.494491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.494501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.494945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.494955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.495399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.495410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-26 14:08:22.495800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-26 14:08:22.495810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 
00:26:55.329 [2024-07-26 14:08:22.496252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.496261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.496703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.496712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.497151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.497163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.497546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.497556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.497987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.497997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.498400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.498411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.498843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.498852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.499323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.499334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.499789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.499799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.500202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.500212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 
00:26:55.330 [2024-07-26 14:08:22.500593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.500603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.501060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.501072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.501456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.501466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.501944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.501954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.502414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.502425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.502872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.502882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.503337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.503348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.503742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.503752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.504155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.504165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.504561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.504571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 
00:26:55.330 [2024-07-26 14:08:22.504945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.504955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.505422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.505433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.505930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.505940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.506335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.506346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.506773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.506786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.507183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.507194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.507648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.507659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.508050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.508062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.508527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.508537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.508984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.508994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 
00:26:55.330 [2024-07-26 14:08:22.509442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.509453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.509888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.509899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.510339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.510350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.510787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.510797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.511248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.511259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.511781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.511792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.512174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-26 14:08:22.512184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-26 14:08:22.512631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.512641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.513046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.513059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.513506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.513517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 
00:26:55.331 [2024-07-26 14:08:22.513959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.513969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.514470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.514481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.514874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.514884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.515133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.515143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.515539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.515549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.515979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.515989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.516371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.516382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.516701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.516712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.517152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.517163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.517550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.517560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 
00:26:55.331 [2024-07-26 14:08:22.518012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.518022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.518544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.518566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.518966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.518980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.519384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.519399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.519825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.519839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.520315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.520329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.520855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.520869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.521347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.521362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.521812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.521826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.522231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.522246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 
00:26:55.331 [2024-07-26 14:08:22.522626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.522640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.523291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.523306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.523716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.523730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.524009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.524023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.524483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.524501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.524900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.524914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.525368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.525382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.525827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.525841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.526304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.526319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.526713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.526727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 
00:26:55.331 [2024-07-26 14:08:22.527199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.527214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.527611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.527626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.528039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.528057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.528436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.528451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-26 14:08:22.529036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-26 14:08:22.529057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.529535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.529549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.529988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.530002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.530445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.530460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.530871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.530886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.531280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.531295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 
00:26:55.332 [2024-07-26 14:08:22.531685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.531700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.532210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.532225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.532683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.532698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.533110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.533124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.533570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.533584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.534052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.534066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.534412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.534426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.534871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.534886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.535340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.535355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.535766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.535780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 
00:26:55.332 [2024-07-26 14:08:22.536181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.536195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.536641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.536658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.537045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.537060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.537580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.537594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.538053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.538068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.538476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.538490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.538950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.538964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.539362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.539377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.539772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.539786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.540167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.540182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 
00:26:55.332 [2024-07-26 14:08:22.540619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.540633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.540894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.540907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.541597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-26 14:08:22.541611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-26 14:08:22.542074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.542089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.542490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.542503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.542959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.542973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.543426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.543440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.543973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.543987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.544363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.544377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.544777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.544791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 
00:26:55.333 [2024-07-26 14:08:22.545256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.545271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.545735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.545748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.546152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.546167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.546561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.546575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.546967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.546981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.547371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.547386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.547847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.547860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.548317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.548331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.548788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.548802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.549252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.549267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 
00:26:55.333 [2024-07-26 14:08:22.549732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.549746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.550210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.550224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.550661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.550675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.551124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.551139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.551595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.551609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.551988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.552002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.552452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.552467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.552994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.553008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.553460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.553473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.553922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.553936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 
00:26:55.333 [2024-07-26 14:08:22.554442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.554456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.554965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.554982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.555443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.555458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.555962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.555976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.556485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.556500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.556958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.556972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.557527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.557542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.557989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.558002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.558464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.558478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.333 [2024-07-26 14:08:22.558951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.558965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 
00:26:55.333 [2024-07-26 14:08:22.559472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.333 [2024-07-26 14:08:22.559486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.333 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.559895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.559908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.560361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.560375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.560884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.560898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.561366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.561381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.561826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.561841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.562315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.562330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.562788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.562801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.563326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.563341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.563810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.563824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 
00:26:55.334 [2024-07-26 14:08:22.564324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.564338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.564867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.564880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.565358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.565372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.565599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.565612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.565777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.565791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.566233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.566248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.566697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.566711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.567156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.567172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.567688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.567703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.568159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.568173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 
00:26:55.334 [2024-07-26 14:08:22.568559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.568572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.569053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.569068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.569524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.569537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.570048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.570063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.570521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.570534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.570920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.570933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.571464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.571478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.571930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.571944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.572403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.572417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.572889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.572903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 
00:26:55.334 [2024-07-26 14:08:22.573460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.573475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.573981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.573997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.574385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.574399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.574905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.574918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.575447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.575461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.575921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.575934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.576374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.576388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.576839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.334 [2024-07-26 14:08:22.576853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.334 qpair failed and we were unable to recover it. 00:26:55.334 [2024-07-26 14:08:22.577304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.577317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.577720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.577734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 
00:26:55.335 [2024-07-26 14:08:22.578238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.578252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.578780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.578793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.579253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.579268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.579819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.579833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.580347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.580360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.580876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.580889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.581420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.581434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.581962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.581975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.582446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.582460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.582936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.582950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 
00:26:55.335 [2024-07-26 14:08:22.583466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.583480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.583812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.583825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.584282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.584296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.584825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.584839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.585388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.585402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.585911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.585925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.586421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.586435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.586959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.586972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.587535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.587549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.588049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.588062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 
00:26:55.335 [2024-07-26 14:08:22.588574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.588588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.589075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.589089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.589571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.589584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.590138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.590152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.590732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.590746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.591228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.591242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.591774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.591787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.592375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.592396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.592946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.592960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.593471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.593485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 
00:26:55.335 [2024-07-26 14:08:22.594072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.594086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.594611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.594628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.595135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.595149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.595725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.595738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.596247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.335 [2024-07-26 14:08:22.596260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.335 qpair failed and we were unable to recover it. 00:26:55.335 [2024-07-26 14:08:22.596784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.596798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.597313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.597327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.597865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.597878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.598403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.598417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.598998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.599012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 
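The errno = 111 reported over and over by posix_sock_create above is ECONNREFUSED on Linux: the host at 10.0.0.2 answers, but nothing is accepting TCP connections on port 4420 while the target is down, so every qpair connect attempt is rejected immediately (the sub-millisecond spacing of the timestamps is consistent with instant refusals rather than timeouts). A minimal standalone probe that surfaces the same failure is sketched below; it is illustrative only, not SPDK's posix_sock_create(), and the address and port are simply the ones from the log.

/* Minimal TCP connect probe -- illustrative only, not SPDK's posix_sock_create().
 * Address and port are taken from the log above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the port, this typically fails with
         * errno 111 (ECONNREFUSED) on Linux, matching the log output. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected\n");
    }

    close(fd);
    return 0;
}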
00:26:55.336 [2024-07-26 14:08:22.599545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.599559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.599998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.600011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.600461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.600475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.600981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.600995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.601565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.601579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.602069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.602083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:55.336 [2024-07-26 14:08:22.602537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.602553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:55.336 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:55.336 [2024-07-26 14:08:22.603063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.603079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 
00:26:55.336 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:55.336 [2024-07-26 14:08:22.603556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.603571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.604082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.604096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.604659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.604673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.605210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.605224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.605680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.605695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.606160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.606174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.606656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.606671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.607147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.607163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.607431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.607445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 
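The "-- #" fragments interleaved with the failures above are bash xtrace lines from the test's own scripting (nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2): the harness evaluates its (( i == 0 )) check, returns 0, ends the start_nvmf_tgt timing section and switches tracing off with "set +x", while the host side keeps retrying the same qpair roughly every half millisecond. When triaging a flood like this, the errno value is the quickest signal; the helper below (illustrative only, not part of SPDK) maps the common connect() failures to a likely cause.

/* Map common connect(2) errno values to a likely cause -- a triage aid for
 * failures like those above; illustrative only, not part of SPDK. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

static const char *classify_connect_errno(int err)
{
    switch (err) {
    case ECONNREFUSED:      /* errno 111 on Linux, as seen throughout this log */
        return "host reachable, but nothing listening on the port (target down or not listening yet)";
    case ETIMEDOUT:
        return "no response at all -- packets dropped or peer unreachable";
    case EHOSTUNREACH:
    case ENETUNREACH:
        return "no route to the target address -- check interfaces and routing";
    default:
        return "unexpected failure -- inspect strerror() output";
    }
}

int main(void)
{
    int samples[] = { ECONNREFUSED, ETIMEDOUT, EHOSTUNREACH };

    for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
        printf("errno %d (%s): %s\n", samples[i],
               strerror(samples[i]), classify_connect_errno(samples[i]));
    return 0;
}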
00:26:55.336 [2024-07-26 14:08:22.607954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.607968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.608533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.608548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.608948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.608962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.609418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.609432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.609935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.609948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.610400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.610414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.610865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.610878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.611330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.611344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.611815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.611828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.612286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.612301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 
00:26:55.336 [2024-07-26 14:08:22.612812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.612826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.613275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.613289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.613690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.613707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.336 qpair failed and we were unable to recover it. 00:26:55.336 [2024-07-26 14:08:22.614174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.336 [2024-07-26 14:08:22.614189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.614750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.614764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.615338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.615352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.615837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.615850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.616424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.616439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.616951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.616965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.617499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.617514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 
00:26:55.337 [2024-07-26 14:08:22.618099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.618113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.618587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.618600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.619160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.619175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.619580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.619594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.620101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.620116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.620464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.620478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.620946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.620961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.621536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.621550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.622105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.622121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.622607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.622621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 
00:26:55.337 [2024-07-26 14:08:22.623027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.623040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.623531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.623545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.623952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.623965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.624452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.624466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.624928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.624942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.625390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.625404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.625859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.625872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.626401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.626414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.626833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.626848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.627383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.627397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 
00:26:55.337 [2024-07-26 14:08:22.627884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.627899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.628386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.628402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.628818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.628832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.629360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.629374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.629837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-26 14:08:22.629850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-26 14:08:22.630391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.630405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.630916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.630931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.631394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.631408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.631820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.631834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.632366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.632381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 
00:26:55.338 [2024-07-26 14:08:22.632784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.632798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.633319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.633333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.633795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.633812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.634292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.634307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.634786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.634800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.635289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.635304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.635765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.635778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.636326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.636340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.636767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.636782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.637570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.637585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 
00:26:55.338 [2024-07-26 14:08:22.638158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.638172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.638581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.638595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.338 [2024-07-26 14:08:22.639177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.639194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:55.338 [2024-07-26 14:08:22.639677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.639693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.338 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:55.338 [2024-07-26 14:08:22.640148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.640164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.640619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.640634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.641184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.641199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.641684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.641698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 
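Buried in the retry spam above, the test script begins configuring the target: host/target_disconnect.sh@19 runs "rpc_cmd bdev_malloc_create 64 512 -b Malloc0". A standalone sketch of that step, assuming rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py and that a target application is already running on the default RPC socket:

    # Create a 64 MB RAM-backed (malloc) bdev with a 512-byte block size, named Malloc0.
    # Arguments mirror the traced rpc_cmd call; the scripts/rpc.py invocation is an assumption.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0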
00:26:55.338 [2024-07-26 14:08:22.642153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.642167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.642562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.642575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.643115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.643129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.643588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.643602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.644095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.644109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.644571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.644584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.645165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.645179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.645594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.645608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.646137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.646151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.646563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.646577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 
00:26:55.338 [2024-07-26 14:08:22.647066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.647080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.647527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.647541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-26 14:08:22.648117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-26 14:08:22.648133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.648640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.648655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.649185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.649201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.649737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.649752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.650212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.650228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.650710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.650725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.651243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.651259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.651767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.651782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 
00:26:55.339 [2024-07-26 14:08:22.652243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.652259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.652834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.652850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.653379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.653400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.653927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.653944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.654472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.654488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.654948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.654963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.655524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.655541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.656139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.656155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.656693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.656708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.657183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.657197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 
00:26:55.339 [2024-07-26 14:08:22.657730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.657744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 Malloc0 00:26:55.339 [2024-07-26 14:08:22.658332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.658345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.658894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.658908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.339 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:55.339 [2024-07-26 14:08:22.659421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.659436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.339 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:55.339 [2024-07-26 14:08:22.659942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.659957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.660491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.660505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.661057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.661071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.661613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.661626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 
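The bare "Malloc0" line above is that RPC's return value (the name of the newly created bdev), and host/target_disconnect.sh@21 then runs "rpc_cmd nvmf_create_transport -t tcp -o" to initialize the NVMe-oF TCP transport. A hedged standalone equivalent, with the flags copied verbatim from the trace:

    # Initialize the TCP transport in the running nvmf target
    # (-t selects the transport type; -o is passed through exactly as the test does).
    ./scripts/rpc.py nvmf_create_transport -t tcp -o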
00:26:55.339 [2024-07-26 14:08:22.662191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.662205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.662732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.662745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.663305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.663319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.663821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.663835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.664394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.664409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.664978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.664991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.665552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.665565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.665774] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.339 [2024-07-26 14:08:22.666096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.666111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.666666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.666680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 
00:26:55.339 [2024-07-26 14:08:22.667215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-26 14:08:22.667229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-26 14:08:22.667766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.667780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.668236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.668250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.668773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.668788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.669310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.669324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.669780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.669793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.670297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.670311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.670807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.670820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.671368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.671382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.671936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.671950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 
00:26:55.340 [2024-07-26 14:08:22.672481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.672495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.672976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.672989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.673459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.673473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.673983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.673999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.340 [2024-07-26 14:08:22.674528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.674542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:55.340 [2024-07-26 14:08:22.675061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.675075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.340 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:55.340 [2024-07-26 14:08:22.675657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.675671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.676201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.676215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 
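host/target_disconnect.sh@22 above creates the subsystem the test will connect to. The same step outside the harness, with arguments mirrored from the trace (scripts/rpc.py path assumed as before):

    # Create subsystem nqn.2016-06.io.spdk:cnode1, allow any host (-a),
    # and set its serial number to SPDK00000000000001.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001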
00:26:55.340 [2024-07-26 14:08:22.676786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.676799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.677343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.677357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.677874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.677887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.678476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.678489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.678936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.678949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.679409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.679423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.679903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.679916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.680476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.680490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.681020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.681034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.681561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.681575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 
00:26:55.340 [2024-07-26 14:08:22.682134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.682148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.682636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.682649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.683193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.683207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.683762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.683775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.684293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.684308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.684830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.684843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.685376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.685390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-26 14:08:22.685919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-26 14:08:22.685932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.340 [2024-07-26 14:08:22.686490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.686504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 
00:26:55.341 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:55.341 [2024-07-26 14:08:22.687009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.687024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 00:26:55.341 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.341 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:55.341 [2024-07-26 14:08:22.687570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.687585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 00:26:55.341 [2024-07-26 14:08:22.688153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.688167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 00:26:55.341 [2024-07-26 14:08:22.688721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.688734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 00:26:55.341 [2024-07-26 14:08:22.689239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.689254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 00:26:55.341 [2024-07-26 14:08:22.689758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.689772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 00:26:55.341 [2024-07-26 14:08:22.690338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.690352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 00:26:55.341 [2024-07-26 14:08:22.690804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.690818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 
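host/target_disconnect.sh@24 above attaches the Malloc0 bdev to that subsystem as a namespace. A sketch of the equivalent direct RPC call (same scripts/rpc.py assumption):

    # Expose the Malloc0 bdev as a namespace of nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0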
00:26:55.341 [2024-07-26 14:08:22.691326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.691340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 00:26:55.341 [2024-07-26 14:08:22.691896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.691910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 00:26:55.341 [2024-07-26 14:08:22.692462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.692476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 00:26:55.341 [2024-07-26 14:08:22.692983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.692997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 00:26:55.341 [2024-07-26 14:08:22.693476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.693492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 00:26:55.341 [2024-07-26 14:08:22.694051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.694065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 00:26:55.341 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.341 [2024-07-26 14:08:22.694630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.694645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 00:26:55.341 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:55.341 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.341 [2024-07-26 14:08:22.695199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.695213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 
00:26:55.341 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:55.341 [2024-07-26 14:08:22.695731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.695745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 00:26:55.341 [2024-07-26 14:08:22.696338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.696353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 00:26:55.341 [2024-07-26 14:08:22.696898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.696912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 00:26:55.341 [2024-07-26 14:08:22.697439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.697453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 00:26:55.341 [2024-07-26 14:08:22.698012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-26 14:08:22.698017] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.341 [2024-07-26 14:08:22.698025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c0000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 
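With the listener added by host/target_disconnect.sh@25, the target finally logs the "*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***" notice seen above, so new connect attempts to that portal can be accepted rather than refused. A sketch of the listener step plus an optional reachability probe (the nc check is illustrative and not part of the test):

    # Publish the subsystem on the TCP portal used throughout this log
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Optional: verify something is now listening on the portal
    nc -z 10.0.0.2 4420 && echo portal reachable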
00:26:55.341 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.341 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:55.341 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.341 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:55.341 [2024-07-26 14:08:22.706459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.341 [2024-07-26 14:08:22.706684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.341 [2024-07-26 14:08:22.706715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.341 [2024-07-26 14:08:22.706727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.341 [2024-07-26 14:08:22.706736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.341 [2024-07-26 14:08:22.706765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.341 qpair failed and we were unable to recover it. 00:26:55.341 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.341 14:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3117129 00:26:55.341 [2024-07-26 14:08:22.716442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.341 [2024-07-26 14:08:22.716606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.341 [2024-07-26 14:08:22.716625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.341 [2024-07-26 14:08:22.716633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.341 [2024-07-26 14:08:22.716640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.341 [2024-07-26 14:08:22.716660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.341 qpair failed and we were unable to recover it. 
00:26:55.341 [2024-07-26 14:08:22.726472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.342 [2024-07-26 14:08:22.726633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.342 [2024-07-26 14:08:22.726652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.342 [2024-07-26 14:08:22.726660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.342 [2024-07-26 14:08:22.726666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.342 [2024-07-26 14:08:22.726684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.342 qpair failed and we were unable to recover it. 00:26:55.342 [2024-07-26 14:08:22.736372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.342 [2024-07-26 14:08:22.736531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.342 [2024-07-26 14:08:22.736549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.342 [2024-07-26 14:08:22.736558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.342 [2024-07-26 14:08:22.736564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.342 [2024-07-26 14:08:22.736583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.342 qpair failed and we were unable to recover it. 00:26:55.342 [2024-07-26 14:08:22.746408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.342 [2024-07-26 14:08:22.746564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.342 [2024-07-26 14:08:22.746585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.342 [2024-07-26 14:08:22.746592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.342 [2024-07-26 14:08:22.746600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.342 [2024-07-26 14:08:22.746618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.342 qpair failed and we were unable to recover it. 
00:26:55.604 [2024-07-26 14:08:22.756399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.604 [2024-07-26 14:08:22.756551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.604 [2024-07-26 14:08:22.756570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.604 [2024-07-26 14:08:22.756578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.604 [2024-07-26 14:08:22.756584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.604 [2024-07-26 14:08:22.756603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.604 qpair failed and we were unable to recover it. 00:26:55.604 [2024-07-26 14:08:22.766479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.604 [2024-07-26 14:08:22.766628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.604 [2024-07-26 14:08:22.766646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.604 [2024-07-26 14:08:22.766654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.604 [2024-07-26 14:08:22.766661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.604 [2024-07-26 14:08:22.766678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.604 qpair failed and we were unable to recover it. 00:26:55.604 [2024-07-26 14:08:22.776522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.604 [2024-07-26 14:08:22.776685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.604 [2024-07-26 14:08:22.776703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.604 [2024-07-26 14:08:22.776711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.604 [2024-07-26 14:08:22.776717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.604 [2024-07-26 14:08:22.776734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.604 qpair failed and we were unable to recover it. 
00:26:55.604 [2024-07-26 14:08:22.786522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.604 [2024-07-26 14:08:22.786675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.604 [2024-07-26 14:08:22.786692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.604 [2024-07-26 14:08:22.786700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.604 [2024-07-26 14:08:22.786707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.604 [2024-07-26 14:08:22.786730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.604 qpair failed and we were unable to recover it. 00:26:55.604 [2024-07-26 14:08:22.796545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.604 [2024-07-26 14:08:22.796698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.604 [2024-07-26 14:08:22.796716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.604 [2024-07-26 14:08:22.796724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.604 [2024-07-26 14:08:22.796731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.604 [2024-07-26 14:08:22.796749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.604 qpair failed and we were unable to recover it. 00:26:55.604 [2024-07-26 14:08:22.806592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.604 [2024-07-26 14:08:22.806766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.604 [2024-07-26 14:08:22.806784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.604 [2024-07-26 14:08:22.806791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.604 [2024-07-26 14:08:22.806798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.604 [2024-07-26 14:08:22.806815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.604 qpair failed and we were unable to recover it. 
00:26:55.604 [2024-07-26 14:08:22.816623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.604 [2024-07-26 14:08:22.816774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.604 [2024-07-26 14:08:22.816792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.604 [2024-07-26 14:08:22.816799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.604 [2024-07-26 14:08:22.816806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.604 [2024-07-26 14:08:22.816823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.604 qpair failed and we were unable to recover it. 00:26:55.604 [2024-07-26 14:08:22.826623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.604 [2024-07-26 14:08:22.826779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.604 [2024-07-26 14:08:22.826796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.604 [2024-07-26 14:08:22.826803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.604 [2024-07-26 14:08:22.826810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.604 [2024-07-26 14:08:22.826828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.604 qpair failed and we were unable to recover it. 00:26:55.604 [2024-07-26 14:08:22.836686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.604 [2024-07-26 14:08:22.836836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.604 [2024-07-26 14:08:22.836856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.604 [2024-07-26 14:08:22.836864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.604 [2024-07-26 14:08:22.836871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.604 [2024-07-26 14:08:22.836888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.604 qpair failed and we were unable to recover it. 
00:26:55.604 [2024-07-26 14:08:22.846712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.604 [2024-07-26 14:08:22.846885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.604 [2024-07-26 14:08:22.846903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.604 [2024-07-26 14:08:22.846910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.604 [2024-07-26 14:08:22.846916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.604 [2024-07-26 14:08:22.846934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.604 qpair failed and we were unable to recover it. 00:26:55.604 [2024-07-26 14:08:22.856727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.604 [2024-07-26 14:08:22.856885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.604 [2024-07-26 14:08:22.856903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.605 [2024-07-26 14:08:22.856910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.605 [2024-07-26 14:08:22.856916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.605 [2024-07-26 14:08:22.856933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.605 qpair failed and we were unable to recover it. 00:26:55.605 [2024-07-26 14:08:22.866765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.605 [2024-07-26 14:08:22.866926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.605 [2024-07-26 14:08:22.866944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.605 [2024-07-26 14:08:22.866952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.605 [2024-07-26 14:08:22.866959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.605 [2024-07-26 14:08:22.866976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.605 qpair failed and we were unable to recover it. 
00:26:55.605 [2024-07-26 14:08:22.876801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.605 [2024-07-26 14:08:22.876970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.605 [2024-07-26 14:08:22.876987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.605 [2024-07-26 14:08:22.876994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.605 [2024-07-26 14:08:22.877001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.605 [2024-07-26 14:08:22.877021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.605 qpair failed and we were unable to recover it. 00:26:55.605 [2024-07-26 14:08:22.886838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.605 [2024-07-26 14:08:22.886990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.605 [2024-07-26 14:08:22.887008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.605 [2024-07-26 14:08:22.887016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.605 [2024-07-26 14:08:22.887023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.605 [2024-07-26 14:08:22.887040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.605 qpair failed and we were unable to recover it. 00:26:55.605 [2024-07-26 14:08:22.896846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.605 [2024-07-26 14:08:22.896998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.605 [2024-07-26 14:08:22.897016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.605 [2024-07-26 14:08:22.897023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.605 [2024-07-26 14:08:22.897030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.605 [2024-07-26 14:08:22.897053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.605 qpair failed and we were unable to recover it. 
00:26:55.605 [2024-07-26 14:08:22.906863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.605 [2024-07-26 14:08:22.907021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.605 [2024-07-26 14:08:22.907038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.605 [2024-07-26 14:08:22.907050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.605 [2024-07-26 14:08:22.907057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.605 [2024-07-26 14:08:22.907074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.605 qpair failed and we were unable to recover it. 00:26:55.605 [2024-07-26 14:08:22.916916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.605 [2024-07-26 14:08:22.917077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.605 [2024-07-26 14:08:22.917096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.605 [2024-07-26 14:08:22.917103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.605 [2024-07-26 14:08:22.917109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.605 [2024-07-26 14:08:22.917127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.605 qpair failed and we were unable to recover it. 00:26:55.605 [2024-07-26 14:08:22.926915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.605 [2024-07-26 14:08:22.927077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.605 [2024-07-26 14:08:22.927095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.605 [2024-07-26 14:08:22.927102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.605 [2024-07-26 14:08:22.927109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.605 [2024-07-26 14:08:22.927127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.605 qpair failed and we were unable to recover it. 
00:26:55.605 [2024-07-26 14:08:22.936954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.605 [2024-07-26 14:08:22.937115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.605 [2024-07-26 14:08:22.937134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.605 [2024-07-26 14:08:22.937141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.605 [2024-07-26 14:08:22.937147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.605 [2024-07-26 14:08:22.937164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.605 qpair failed and we were unable to recover it. 00:26:55.605 [2024-07-26 14:08:22.947141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.605 [2024-07-26 14:08:22.947530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.605 [2024-07-26 14:08:22.947547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.605 [2024-07-26 14:08:22.947554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.605 [2024-07-26 14:08:22.947561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.605 [2024-07-26 14:08:22.947578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.605 qpair failed and we were unable to recover it. 00:26:55.605 [2024-07-26 14:08:22.957110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.605 [2024-07-26 14:08:22.957271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.605 [2024-07-26 14:08:22.957289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.605 [2024-07-26 14:08:22.957297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.605 [2024-07-26 14:08:22.957304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.605 [2024-07-26 14:08:22.957321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.605 qpair failed and we were unable to recover it. 
00:26:55.605 [2024-07-26 14:08:22.967123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.605 [2024-07-26 14:08:22.967273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.605 [2024-07-26 14:08:22.967291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.605 [2024-07-26 14:08:22.967299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.605 [2024-07-26 14:08:22.967309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.605 [2024-07-26 14:08:22.967326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.605 qpair failed and we were unable to recover it. 00:26:55.605 [2024-07-26 14:08:22.977134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.605 [2024-07-26 14:08:22.977290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.605 [2024-07-26 14:08:22.977307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.605 [2024-07-26 14:08:22.977315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.605 [2024-07-26 14:08:22.977322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.605 [2024-07-26 14:08:22.977340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.605 qpair failed and we were unable to recover it. 00:26:55.605 [2024-07-26 14:08:22.987120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.605 [2024-07-26 14:08:22.987273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.605 [2024-07-26 14:08:22.987290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.606 [2024-07-26 14:08:22.987297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.606 [2024-07-26 14:08:22.987304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.606 [2024-07-26 14:08:22.987321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.606 qpair failed and we were unable to recover it. 
00:26:55.606 [2024-07-26 14:08:22.997173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.606 [2024-07-26 14:08:22.997319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.606 [2024-07-26 14:08:22.997336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.606 [2024-07-26 14:08:22.997344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.606 [2024-07-26 14:08:22.997350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.606 [2024-07-26 14:08:22.997368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.606 qpair failed and we were unable to recover it. 00:26:55.606 [2024-07-26 14:08:23.007210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.606 [2024-07-26 14:08:23.007382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.606 [2024-07-26 14:08:23.007399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.606 [2024-07-26 14:08:23.007407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.606 [2024-07-26 14:08:23.007413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.606 [2024-07-26 14:08:23.007431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.606 qpair failed and we were unable to recover it. 00:26:55.606 [2024-07-26 14:08:23.017139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.606 [2024-07-26 14:08:23.017295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.606 [2024-07-26 14:08:23.017313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.606 [2024-07-26 14:08:23.017320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.606 [2024-07-26 14:08:23.017327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.606 [2024-07-26 14:08:23.017344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.606 qpair failed and we were unable to recover it. 
00:26:55.606 [2024-07-26 14:08:23.027213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.606 [2024-07-26 14:08:23.027371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.606 [2024-07-26 14:08:23.027389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.606 [2024-07-26 14:08:23.027396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.606 [2024-07-26 14:08:23.027403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.606 [2024-07-26 14:08:23.027421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.606 qpair failed and we were unable to recover it. 00:26:55.606 [2024-07-26 14:08:23.037265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.606 [2024-07-26 14:08:23.037414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.606 [2024-07-26 14:08:23.037432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.606 [2024-07-26 14:08:23.037440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.606 [2024-07-26 14:08:23.037445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.606 [2024-07-26 14:08:23.037463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.606 qpair failed and we were unable to recover it. 00:26:55.868 [2024-07-26 14:08:23.047292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.868 [2024-07-26 14:08:23.047443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.868 [2024-07-26 14:08:23.047460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.868 [2024-07-26 14:08:23.047468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.868 [2024-07-26 14:08:23.047475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.868 [2024-07-26 14:08:23.047493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.868 qpair failed and we were unable to recover it. 
00:26:55.868 [2024-07-26 14:08:23.057327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.868 [2024-07-26 14:08:23.057477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.868 [2024-07-26 14:08:23.057495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.868 [2024-07-26 14:08:23.057506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.868 [2024-07-26 14:08:23.057513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.868 [2024-07-26 14:08:23.057530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.868 qpair failed and we were unable to recover it. 00:26:55.868 [2024-07-26 14:08:23.067326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.868 [2024-07-26 14:08:23.067492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.868 [2024-07-26 14:08:23.067509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.868 [2024-07-26 14:08:23.067516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.868 [2024-07-26 14:08:23.067524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.868 [2024-07-26 14:08:23.067540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.868 qpair failed and we were unable to recover it. 00:26:55.868 [2024-07-26 14:08:23.077381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.868 [2024-07-26 14:08:23.077529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.868 [2024-07-26 14:08:23.077547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.868 [2024-07-26 14:08:23.077555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.868 [2024-07-26 14:08:23.077561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.868 [2024-07-26 14:08:23.077579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.868 qpair failed and we were unable to recover it. 
00:26:55.868 [2024-07-26 14:08:23.087411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.868 [2024-07-26 14:08:23.087567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.868 [2024-07-26 14:08:23.087585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.868 [2024-07-26 14:08:23.087592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.868 [2024-07-26 14:08:23.087599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.868 [2024-07-26 14:08:23.087616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.868 qpair failed and we were unable to recover it. 00:26:55.868 [2024-07-26 14:08:23.097432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.868 [2024-07-26 14:08:23.097582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.868 [2024-07-26 14:08:23.097600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.868 [2024-07-26 14:08:23.097607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.868 [2024-07-26 14:08:23.097614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.868 [2024-07-26 14:08:23.097631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.868 qpair failed and we were unable to recover it. 00:26:55.868 [2024-07-26 14:08:23.107452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.868 [2024-07-26 14:08:23.107649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.868 [2024-07-26 14:08:23.107667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.868 [2024-07-26 14:08:23.107674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.868 [2024-07-26 14:08:23.107681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.868 [2024-07-26 14:08:23.107698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.868 qpair failed and we were unable to recover it. 
00:26:55.868 [2024-07-26 14:08:23.117496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.868 [2024-07-26 14:08:23.117643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.868 [2024-07-26 14:08:23.117661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.868 [2024-07-26 14:08:23.117669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.868 [2024-07-26 14:08:23.117675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.868 [2024-07-26 14:08:23.117692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.868 qpair failed and we were unable to recover it. 00:26:55.868 [2024-07-26 14:08:23.127533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.868 [2024-07-26 14:08:23.127683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.868 [2024-07-26 14:08:23.127701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.868 [2024-07-26 14:08:23.127708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.868 [2024-07-26 14:08:23.127715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.868 [2024-07-26 14:08:23.127732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.868 qpair failed and we were unable to recover it. 00:26:55.868 [2024-07-26 14:08:23.137550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.868 [2024-07-26 14:08:23.137700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.868 [2024-07-26 14:08:23.137717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.868 [2024-07-26 14:08:23.137724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.868 [2024-07-26 14:08:23.137731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.868 [2024-07-26 14:08:23.137749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.868 qpair failed and we were unable to recover it. 
00:26:55.868 [2024-07-26 14:08:23.147559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.868 [2024-07-26 14:08:23.147719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.868 [2024-07-26 14:08:23.147736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.868 [2024-07-26 14:08:23.147747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.868 [2024-07-26 14:08:23.147753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.868 [2024-07-26 14:08:23.147770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.868 qpair failed and we were unable to recover it. 00:26:55.868 [2024-07-26 14:08:23.157608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.868 [2024-07-26 14:08:23.157760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.868 [2024-07-26 14:08:23.157777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.868 [2024-07-26 14:08:23.157784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.868 [2024-07-26 14:08:23.157791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.868 [2024-07-26 14:08:23.157809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.868 qpair failed and we were unable to recover it. 00:26:55.868 [2024-07-26 14:08:23.167675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.868 [2024-07-26 14:08:23.167841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.868 [2024-07-26 14:08:23.167858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.868 [2024-07-26 14:08:23.167865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.868 [2024-07-26 14:08:23.167872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.868 [2024-07-26 14:08:23.167889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.868 qpair failed and we were unable to recover it. 
00:26:55.868 [2024-07-26 14:08:23.177905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.868 [2024-07-26 14:08:23.178063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.868 [2024-07-26 14:08:23.178081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.868 [2024-07-26 14:08:23.178088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.868 [2024-07-26 14:08:23.178095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.868 [2024-07-26 14:08:23.178114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.868 qpair failed and we were unable to recover it. 00:26:55.868 [2024-07-26 14:08:23.187678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.868 [2024-07-26 14:08:23.187831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.868 [2024-07-26 14:08:23.187848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.868 [2024-07-26 14:08:23.187856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.868 [2024-07-26 14:08:23.187863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.868 [2024-07-26 14:08:23.187880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.868 qpair failed and we were unable to recover it. 00:26:55.868 [2024-07-26 14:08:23.197716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.868 [2024-07-26 14:08:23.197866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.868 [2024-07-26 14:08:23.197884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.868 [2024-07-26 14:08:23.197891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.868 [2024-07-26 14:08:23.197898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.868 [2024-07-26 14:08:23.197916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.868 qpair failed and we were unable to recover it. 
00:26:55.868 [2024-07-26 14:08:23.207740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.868 [2024-07-26 14:08:23.207890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.868 [2024-07-26 14:08:23.207907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.868 [2024-07-26 14:08:23.207915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.868 [2024-07-26 14:08:23.207921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.868 [2024-07-26 14:08:23.207939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.868 qpair failed and we were unable to recover it. 00:26:55.868 [2024-07-26 14:08:23.217782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.868 [2024-07-26 14:08:23.217935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.868 [2024-07-26 14:08:23.217954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.868 [2024-07-26 14:08:23.217961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.868 [2024-07-26 14:08:23.217967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.868 [2024-07-26 14:08:23.217985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.868 qpair failed and we were unable to recover it. 00:26:55.868 [2024-07-26 14:08:23.227772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.868 [2024-07-26 14:08:23.227923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.868 [2024-07-26 14:08:23.227941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.868 [2024-07-26 14:08:23.227948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.868 [2024-07-26 14:08:23.227954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.868 [2024-07-26 14:08:23.227972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.868 qpair failed and we were unable to recover it. 
00:26:55.868 [2024-07-26 14:08:23.237803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.868 [2024-07-26 14:08:23.237961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.869 [2024-07-26 14:08:23.237982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.869 [2024-07-26 14:08:23.237989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.869 [2024-07-26 14:08:23.237995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.869 [2024-07-26 14:08:23.238013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.869 qpair failed and we were unable to recover it. 00:26:55.869 [2024-07-26 14:08:23.247891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.869 [2024-07-26 14:08:23.248040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.869 [2024-07-26 14:08:23.248063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.869 [2024-07-26 14:08:23.248070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.869 [2024-07-26 14:08:23.248077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.869 [2024-07-26 14:08:23.248095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.869 qpair failed and we were unable to recover it. 00:26:55.869 [2024-07-26 14:08:23.257884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.869 [2024-07-26 14:08:23.258037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.869 [2024-07-26 14:08:23.258060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.869 [2024-07-26 14:08:23.258068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.869 [2024-07-26 14:08:23.258074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.869 [2024-07-26 14:08:23.258091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.869 qpair failed and we were unable to recover it. 
00:26:55.869 [2024-07-26 14:08:23.267874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.869 [2024-07-26 14:08:23.268027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.869 [2024-07-26 14:08:23.268052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.869 [2024-07-26 14:08:23.268060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.869 [2024-07-26 14:08:23.268067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.869 [2024-07-26 14:08:23.268083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.869 qpair failed and we were unable to recover it. 00:26:55.869 [2024-07-26 14:08:23.277933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.869 [2024-07-26 14:08:23.278088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.869 [2024-07-26 14:08:23.278105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.869 [2024-07-26 14:08:23.278112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.869 [2024-07-26 14:08:23.278119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.869 [2024-07-26 14:08:23.278140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.869 qpair failed and we were unable to recover it. 00:26:55.869 [2024-07-26 14:08:23.287954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.869 [2024-07-26 14:08:23.288115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.869 [2024-07-26 14:08:23.288134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.869 [2024-07-26 14:08:23.288141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.869 [2024-07-26 14:08:23.288147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.869 [2024-07-26 14:08:23.288165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.869 qpair failed and we were unable to recover it. 
00:26:55.869 [2024-07-26 14:08:23.297966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.869 [2024-07-26 14:08:23.298125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.869 [2024-07-26 14:08:23.298142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.869 [2024-07-26 14:08:23.298149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.869 [2024-07-26 14:08:23.298156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:55.869 [2024-07-26 14:08:23.298173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.869 qpair failed and we were unable to recover it. 00:26:56.129 [2024-07-26 14:08:23.307998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.129 [2024-07-26 14:08:23.308164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.129 [2024-07-26 14:08:23.308182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.129 [2024-07-26 14:08:23.308189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.129 [2024-07-26 14:08:23.308196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.129 [2024-07-26 14:08:23.308214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.129 qpair failed and we were unable to recover it. 00:26:56.129 [2024-07-26 14:08:23.318032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.129 [2024-07-26 14:08:23.318200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.129 [2024-07-26 14:08:23.318218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.129 [2024-07-26 14:08:23.318226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.129 [2024-07-26 14:08:23.318232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.129 [2024-07-26 14:08:23.318249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.129 qpair failed and we were unable to recover it. 
00:26:56.129 [2024-07-26 14:08:23.328081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.129 [2024-07-26 14:08:23.328234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.129 [2024-07-26 14:08:23.328255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.129 [2024-07-26 14:08:23.328262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.129 [2024-07-26 14:08:23.328269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.129 [2024-07-26 14:08:23.328287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.129 qpair failed and we were unable to recover it. 00:26:56.129 [2024-07-26 14:08:23.338120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.129 [2024-07-26 14:08:23.338283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.129 [2024-07-26 14:08:23.338301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.129 [2024-07-26 14:08:23.338308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.129 [2024-07-26 14:08:23.338315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.129 [2024-07-26 14:08:23.338333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.129 qpair failed and we were unable to recover it. 00:26:56.129 [2024-07-26 14:08:23.348103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.129 [2024-07-26 14:08:23.348250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.129 [2024-07-26 14:08:23.348268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.129 [2024-07-26 14:08:23.348275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.129 [2024-07-26 14:08:23.348282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.129 [2024-07-26 14:08:23.348299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.129 qpair failed and we were unable to recover it. 
00:26:56.129 [2024-07-26 14:08:23.358124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.129 [2024-07-26 14:08:23.358273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.129 [2024-07-26 14:08:23.358290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.129 [2024-07-26 14:08:23.358297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.129 [2024-07-26 14:08:23.358304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.129 [2024-07-26 14:08:23.358322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.129 qpair failed and we were unable to recover it. 00:26:56.129 [2024-07-26 14:08:23.368156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.129 [2024-07-26 14:08:23.368318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.129 [2024-07-26 14:08:23.368335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.129 [2024-07-26 14:08:23.368343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.129 [2024-07-26 14:08:23.368353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.129 [2024-07-26 14:08:23.368370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.129 qpair failed and we were unable to recover it. 00:26:56.129 [2024-07-26 14:08:23.378226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.129 [2024-07-26 14:08:23.378376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.129 [2024-07-26 14:08:23.378394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.129 [2024-07-26 14:08:23.378402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.129 [2024-07-26 14:08:23.378408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.129 [2024-07-26 14:08:23.378426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.129 qpair failed and we were unable to recover it. 
00:26:56.129 [2024-07-26 14:08:23.388213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.129 [2024-07-26 14:08:23.388366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.130 [2024-07-26 14:08:23.388383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.130 [2024-07-26 14:08:23.388391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.130 [2024-07-26 14:08:23.388397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.130 [2024-07-26 14:08:23.388414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.130 qpair failed and we were unable to recover it. 00:26:56.130 [2024-07-26 14:08:23.398274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.130 [2024-07-26 14:08:23.398423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.130 [2024-07-26 14:08:23.398441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.130 [2024-07-26 14:08:23.398448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.130 [2024-07-26 14:08:23.398454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.130 [2024-07-26 14:08:23.398471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.130 qpair failed and we were unable to recover it. 00:26:56.130 [2024-07-26 14:08:23.408311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.130 [2024-07-26 14:08:23.408456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.130 [2024-07-26 14:08:23.408474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.130 [2024-07-26 14:08:23.408481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.130 [2024-07-26 14:08:23.408488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.130 [2024-07-26 14:08:23.408505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.130 qpair failed and we were unable to recover it. 
00:26:56.130 [2024-07-26 14:08:23.418310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.130 [2024-07-26 14:08:23.418461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.130 [2024-07-26 14:08:23.418479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.130 [2024-07-26 14:08:23.418486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.130 [2024-07-26 14:08:23.418492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.130 [2024-07-26 14:08:23.418509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.130 qpair failed and we were unable to recover it. 00:26:56.130 [2024-07-26 14:08:23.428371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.130 [2024-07-26 14:08:23.428562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.130 [2024-07-26 14:08:23.428579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.130 [2024-07-26 14:08:23.428587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.130 [2024-07-26 14:08:23.428594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.130 [2024-07-26 14:08:23.428612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.130 qpair failed and we were unable to recover it. 00:26:56.130 [2024-07-26 14:08:23.438422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.130 [2024-07-26 14:08:23.438591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.130 [2024-07-26 14:08:23.438608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.130 [2024-07-26 14:08:23.438615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.130 [2024-07-26 14:08:23.438623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.130 [2024-07-26 14:08:23.438640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.130 qpair failed and we were unable to recover it. 
00:26:56.130 [2024-07-26 14:08:23.448417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.130 [2024-07-26 14:08:23.448569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.130 [2024-07-26 14:08:23.448587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.130 [2024-07-26 14:08:23.448594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.130 [2024-07-26 14:08:23.448600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.130 [2024-07-26 14:08:23.448618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.130 qpair failed and we were unable to recover it. 00:26:56.130 [2024-07-26 14:08:23.458410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.130 [2024-07-26 14:08:23.458568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.130 [2024-07-26 14:08:23.458585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.130 [2024-07-26 14:08:23.458597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.130 [2024-07-26 14:08:23.458604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.130 [2024-07-26 14:08:23.458621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.130 qpair failed and we were unable to recover it. 00:26:56.130 [2024-07-26 14:08:23.468438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.130 [2024-07-26 14:08:23.468595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.130 [2024-07-26 14:08:23.468613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.130 [2024-07-26 14:08:23.468621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.130 [2024-07-26 14:08:23.468627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.130 [2024-07-26 14:08:23.468645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.130 qpair failed and we were unable to recover it. 
00:26:56.130 [2024-07-26 14:08:23.478418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.130 [2024-07-26 14:08:23.478614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.130 [2024-07-26 14:08:23.478631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.130 [2024-07-26 14:08:23.478639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.130 [2024-07-26 14:08:23.478646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.130 [2024-07-26 14:08:23.478663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.130 qpair failed and we were unable to recover it. 00:26:56.130 [2024-07-26 14:08:23.488518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.130 [2024-07-26 14:08:23.488665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.130 [2024-07-26 14:08:23.488683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.130 [2024-07-26 14:08:23.488690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.130 [2024-07-26 14:08:23.488697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.130 [2024-07-26 14:08:23.488714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.130 qpair failed and we were unable to recover it. 00:26:56.130 [2024-07-26 14:08:23.498575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.130 [2024-07-26 14:08:23.498725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.130 [2024-07-26 14:08:23.498743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.130 [2024-07-26 14:08:23.498751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.130 [2024-07-26 14:08:23.498758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.130 [2024-07-26 14:08:23.498775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.130 qpair failed and we were unable to recover it. 
00:26:56.130 [2024-07-26 14:08:23.508608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.130 [2024-07-26 14:08:23.508799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.130 [2024-07-26 14:08:23.508816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.130 [2024-07-26 14:08:23.508823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.130 [2024-07-26 14:08:23.508829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.130 [2024-07-26 14:08:23.508847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.130 qpair failed and we were unable to recover it. 00:26:56.130 [2024-07-26 14:08:23.518627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.130 [2024-07-26 14:08:23.518779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.130 [2024-07-26 14:08:23.518797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.131 [2024-07-26 14:08:23.518804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.131 [2024-07-26 14:08:23.518810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.131 [2024-07-26 14:08:23.518828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.131 qpair failed and we were unable to recover it. 00:26:56.131 [2024-07-26 14:08:23.528671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.131 [2024-07-26 14:08:23.528842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.131 [2024-07-26 14:08:23.528859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.131 [2024-07-26 14:08:23.528867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.131 [2024-07-26 14:08:23.528873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.131 [2024-07-26 14:08:23.528890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.131 qpair failed and we were unable to recover it. 
00:26:56.131 [2024-07-26 14:08:23.538671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.131 [2024-07-26 14:08:23.538820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.131 [2024-07-26 14:08:23.538838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.131 [2024-07-26 14:08:23.538845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.131 [2024-07-26 14:08:23.538851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.131 [2024-07-26 14:08:23.538869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.131 qpair failed and we were unable to recover it. 00:26:56.131 [2024-07-26 14:08:23.548715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.131 [2024-07-26 14:08:23.548867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.131 [2024-07-26 14:08:23.548886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.131 [2024-07-26 14:08:23.548898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.131 [2024-07-26 14:08:23.548905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.131 [2024-07-26 14:08:23.548923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.131 qpair failed and we were unable to recover it. 00:26:56.131 [2024-07-26 14:08:23.558700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.131 [2024-07-26 14:08:23.558851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.131 [2024-07-26 14:08:23.558869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.131 [2024-07-26 14:08:23.558876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.131 [2024-07-26 14:08:23.558883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.131 [2024-07-26 14:08:23.558901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.131 qpair failed and we were unable to recover it. 
00:26:56.391 [2024-07-26 14:08:23.568732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.391 [2024-07-26 14:08:23.568880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.391 [2024-07-26 14:08:23.568898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.391 [2024-07-26 14:08:23.568906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.391 [2024-07-26 14:08:23.568913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.391 [2024-07-26 14:08:23.568931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.391 qpair failed and we were unable to recover it. 00:26:56.391 [2024-07-26 14:08:23.578783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.391 [2024-07-26 14:08:23.578935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.391 [2024-07-26 14:08:23.578952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.391 [2024-07-26 14:08:23.578960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.391 [2024-07-26 14:08:23.578967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.391 [2024-07-26 14:08:23.578983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.391 qpair failed and we were unable to recover it. 00:26:56.391 [2024-07-26 14:08:23.588950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.391 [2024-07-26 14:08:23.589110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.391 [2024-07-26 14:08:23.589128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.391 [2024-07-26 14:08:23.589135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.391 [2024-07-26 14:08:23.589141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.392 [2024-07-26 14:08:23.589159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.392 qpair failed and we were unable to recover it. 
00:26:56.392 [2024-07-26 14:08:23.598772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.392 [2024-07-26 14:08:23.598925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.392 [2024-07-26 14:08:23.598943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.392 [2024-07-26 14:08:23.598951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.392 [2024-07-26 14:08:23.598957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.392 [2024-07-26 14:08:23.598975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.392 qpair failed and we were unable to recover it. 00:26:56.392 [2024-07-26 14:08:23.608806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.392 [2024-07-26 14:08:23.608957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.392 [2024-07-26 14:08:23.608975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.392 [2024-07-26 14:08:23.608982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.392 [2024-07-26 14:08:23.608988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.392 [2024-07-26 14:08:23.609006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.392 qpair failed and we were unable to recover it. 00:26:56.392 [2024-07-26 14:08:23.618816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.392 [2024-07-26 14:08:23.619213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.392 [2024-07-26 14:08:23.619231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.392 [2024-07-26 14:08:23.619237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.392 [2024-07-26 14:08:23.619243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.392 [2024-07-26 14:08:23.619261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.392 qpair failed and we were unable to recover it. 
00:26:56.392 [2024-07-26 14:08:23.628879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.392 [2024-07-26 14:08:23.629259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.392 [2024-07-26 14:08:23.629276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.392 [2024-07-26 14:08:23.629284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.392 [2024-07-26 14:08:23.629291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.392 [2024-07-26 14:08:23.629308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.392 qpair failed and we were unable to recover it. 00:26:56.392 [2024-07-26 14:08:23.638929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.392 [2024-07-26 14:08:23.639094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.392 [2024-07-26 14:08:23.639114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.392 [2024-07-26 14:08:23.639122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.392 [2024-07-26 14:08:23.639129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.392 [2024-07-26 14:08:23.639146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.392 qpair failed and we were unable to recover it. 00:26:56.392 [2024-07-26 14:08:23.648947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.392 [2024-07-26 14:08:23.649104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.392 [2024-07-26 14:08:23.649123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.392 [2024-07-26 14:08:23.649130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.392 [2024-07-26 14:08:23.649136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.392 [2024-07-26 14:08:23.649154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.392 qpair failed and we were unable to recover it. 
00:26:56.392 [2024-07-26 14:08:23.658953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.392 [2024-07-26 14:08:23.659140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.392 [2024-07-26 14:08:23.659158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.392 [2024-07-26 14:08:23.659165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.392 [2024-07-26 14:08:23.659172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.392 [2024-07-26 14:08:23.659189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.392 qpair failed and we were unable to recover it. 00:26:56.392 [2024-07-26 14:08:23.669017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.392 [2024-07-26 14:08:23.669174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.392 [2024-07-26 14:08:23.669191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.392 [2024-07-26 14:08:23.669199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.392 [2024-07-26 14:08:23.669205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.392 [2024-07-26 14:08:23.669223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.392 qpair failed and we were unable to recover it. 00:26:56.392 [2024-07-26 14:08:23.679039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.392 [2024-07-26 14:08:23.679230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.392 [2024-07-26 14:08:23.679247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.392 [2024-07-26 14:08:23.679254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.392 [2024-07-26 14:08:23.679261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.392 [2024-07-26 14:08:23.679282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.392 qpair failed and we were unable to recover it. 
00:26:56.392 [2024-07-26 14:08:23.689071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.392 [2024-07-26 14:08:23.689224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.392 [2024-07-26 14:08:23.689241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.392 [2024-07-26 14:08:23.689248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.392 [2024-07-26 14:08:23.689254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.392 [2024-07-26 14:08:23.689272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.392 qpair failed and we were unable to recover it. 00:26:56.392 [2024-07-26 14:08:23.699041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.392 [2024-07-26 14:08:23.699197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.392 [2024-07-26 14:08:23.699214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.392 [2024-07-26 14:08:23.699222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.392 [2024-07-26 14:08:23.699228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.392 [2024-07-26 14:08:23.699246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.392 qpair failed and we were unable to recover it. 00:26:56.392 [2024-07-26 14:08:23.709114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.392 [2024-07-26 14:08:23.709486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.392 [2024-07-26 14:08:23.709503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.392 [2024-07-26 14:08:23.709510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.392 [2024-07-26 14:08:23.709517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.392 [2024-07-26 14:08:23.709533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.392 qpair failed and we were unable to recover it. 
00:26:56.392 [2024-07-26 14:08:23.719158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.392 [2024-07-26 14:08:23.719311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.392 [2024-07-26 14:08:23.719328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.392 [2024-07-26 14:08:23.719335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.393 [2024-07-26 14:08:23.719341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.393 [2024-07-26 14:08:23.719359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.393 qpair failed and we were unable to recover it. 00:26:56.393 [2024-07-26 14:08:23.729127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.393 [2024-07-26 14:08:23.729284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.393 [2024-07-26 14:08:23.729305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.393 [2024-07-26 14:08:23.729313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.393 [2024-07-26 14:08:23.729318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.393 [2024-07-26 14:08:23.729335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.393 qpair failed and we were unable to recover it. 00:26:56.393 [2024-07-26 14:08:23.739238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.393 [2024-07-26 14:08:23.739390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.393 [2024-07-26 14:08:23.739407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.393 [2024-07-26 14:08:23.739416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.393 [2024-07-26 14:08:23.739422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.393 [2024-07-26 14:08:23.739440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.393 qpair failed and we were unable to recover it. 
00:26:56.393 [2024-07-26 14:08:23.749185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.393 [2024-07-26 14:08:23.749338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.393 [2024-07-26 14:08:23.749356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.393 [2024-07-26 14:08:23.749363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.393 [2024-07-26 14:08:23.749370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:26:56.393 [2024-07-26 14:08:23.749387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.393 qpair failed and we were unable to recover it. 00:26:56.393 [2024-07-26 14:08:23.759246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.393 [2024-07-26 14:08:23.759403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.393 [2024-07-26 14:08:23.759427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.393 [2024-07-26 14:08:23.759435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.393 [2024-07-26 14:08:23.759443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.393 [2024-07-26 14:08:23.759463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.393 qpair failed and we were unable to recover it. 00:26:56.393 [2024-07-26 14:08:23.769295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.393 [2024-07-26 14:08:23.769451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.393 [2024-07-26 14:08:23.769470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.393 [2024-07-26 14:08:23.769478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.393 [2024-07-26 14:08:23.769487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.393 [2024-07-26 14:08:23.769506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.393 qpair failed and we were unable to recover it. 
00:26:56.393 [2024-07-26 14:08:23.779304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.393 [2024-07-26 14:08:23.779460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.393 [2024-07-26 14:08:23.779477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.393 [2024-07-26 14:08:23.779484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.393 [2024-07-26 14:08:23.779490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.393 [2024-07-26 14:08:23.779508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.393 qpair failed and we were unable to recover it. 00:26:56.393 [2024-07-26 14:08:23.789387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.393 [2024-07-26 14:08:23.789538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.393 [2024-07-26 14:08:23.789556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.393 [2024-07-26 14:08:23.789564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.393 [2024-07-26 14:08:23.789570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.393 [2024-07-26 14:08:23.789588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.393 qpair failed and we were unable to recover it. 00:26:56.393 [2024-07-26 14:08:23.799404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.393 [2024-07-26 14:08:23.799558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.393 [2024-07-26 14:08:23.799577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.393 [2024-07-26 14:08:23.799584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.393 [2024-07-26 14:08:23.799591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.393 [2024-07-26 14:08:23.799608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.393 qpair failed and we were unable to recover it. 
00:26:56.393 [2024-07-26 14:08:23.809453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.393 [2024-07-26 14:08:23.809607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.393 [2024-07-26 14:08:23.809624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.393 [2024-07-26 14:08:23.809632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.393 [2024-07-26 14:08:23.809638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.393 [2024-07-26 14:08:23.809656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.393 qpair failed and we were unable to recover it. 00:26:56.393 [2024-07-26 14:08:23.819426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.393 [2024-07-26 14:08:23.819607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.393 [2024-07-26 14:08:23.819625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.393 [2024-07-26 14:08:23.819633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.393 [2024-07-26 14:08:23.819639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.393 [2024-07-26 14:08:23.819656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.393 qpair failed and we were unable to recover it. 00:26:56.654 [2024-07-26 14:08:23.829438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.654 [2024-07-26 14:08:23.829593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.654 [2024-07-26 14:08:23.829612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.654 [2024-07-26 14:08:23.829620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.654 [2024-07-26 14:08:23.829627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.654 [2024-07-26 14:08:23.829644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.654 qpair failed and we were unable to recover it. 
00:26:56.654 [2024-07-26 14:08:23.839538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.654 [2024-07-26 14:08:23.839686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.654 [2024-07-26 14:08:23.839705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.654 [2024-07-26 14:08:23.839712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.654 [2024-07-26 14:08:23.839719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.654 [2024-07-26 14:08:23.839736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.654 qpair failed and we were unable to recover it. 00:26:56.654 [2024-07-26 14:08:23.849509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.654 [2024-07-26 14:08:23.849669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.654 [2024-07-26 14:08:23.849688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.654 [2024-07-26 14:08:23.849696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.654 [2024-07-26 14:08:23.849702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.654 [2024-07-26 14:08:23.849719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.654 qpair failed and we were unable to recover it. 00:26:56.654 [2024-07-26 14:08:23.859533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.654 [2024-07-26 14:08:23.859686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.654 [2024-07-26 14:08:23.859705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.654 [2024-07-26 14:08:23.859712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.654 [2024-07-26 14:08:23.859722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.654 [2024-07-26 14:08:23.859740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.655 qpair failed and we were unable to recover it. 
00:26:56.655 [2024-07-26 14:08:23.869604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.655 [2024-07-26 14:08:23.869757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.655 [2024-07-26 14:08:23.869775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.655 [2024-07-26 14:08:23.869783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.655 [2024-07-26 14:08:23.869789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.655 [2024-07-26 14:08:23.869806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.655 qpair failed and we were unable to recover it. 00:26:56.655 [2024-07-26 14:08:23.879640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.655 [2024-07-26 14:08:23.879791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.655 [2024-07-26 14:08:23.879809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.655 [2024-07-26 14:08:23.879816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.655 [2024-07-26 14:08:23.879822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.655 [2024-07-26 14:08:23.879840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.655 qpair failed and we were unable to recover it. 00:26:56.655 [2024-07-26 14:08:23.889634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.655 [2024-07-26 14:08:23.889792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.655 [2024-07-26 14:08:23.889811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.655 [2024-07-26 14:08:23.889818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.655 [2024-07-26 14:08:23.889824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.655 [2024-07-26 14:08:23.889842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.655 qpair failed and we were unable to recover it. 
00:26:56.655 [2024-07-26 14:08:23.899648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.655 [2024-07-26 14:08:23.899801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.655 [2024-07-26 14:08:23.899819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.655 [2024-07-26 14:08:23.899826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.655 [2024-07-26 14:08:23.899832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.655 [2024-07-26 14:08:23.899850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.655 qpair failed and we were unable to recover it. 00:26:56.655 [2024-07-26 14:08:23.909683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.655 [2024-07-26 14:08:23.909880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.655 [2024-07-26 14:08:23.909898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.655 [2024-07-26 14:08:23.909906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.655 [2024-07-26 14:08:23.909912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.655 [2024-07-26 14:08:23.909930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.655 qpair failed and we were unable to recover it. 00:26:56.655 [2024-07-26 14:08:23.919707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.655 [2024-07-26 14:08:23.919891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.655 [2024-07-26 14:08:23.919910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.655 [2024-07-26 14:08:23.919918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.655 [2024-07-26 14:08:23.919925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.655 [2024-07-26 14:08:23.919942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.655 qpair failed and we were unable to recover it. 
00:26:56.655 [2024-07-26 14:08:23.929794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.655 [2024-07-26 14:08:23.929944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.655 [2024-07-26 14:08:23.929963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.655 [2024-07-26 14:08:23.929971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.655 [2024-07-26 14:08:23.929977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.655 [2024-07-26 14:08:23.929995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.655 qpair failed and we were unable to recover it. 00:26:56.655 [2024-07-26 14:08:23.939821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.655 [2024-07-26 14:08:23.939974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.655 [2024-07-26 14:08:23.939992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.655 [2024-07-26 14:08:23.940000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.655 [2024-07-26 14:08:23.940006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.655 [2024-07-26 14:08:23.940024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.655 qpair failed and we were unable to recover it. 00:26:56.655 [2024-07-26 14:08:23.949863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.655 [2024-07-26 14:08:23.950010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.655 [2024-07-26 14:08:23.950028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.655 [2024-07-26 14:08:23.950041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.655 [2024-07-26 14:08:23.950056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.655 [2024-07-26 14:08:23.950078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.655 qpair failed and we were unable to recover it. 
00:26:56.655 [2024-07-26 14:08:23.959813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.655 [2024-07-26 14:08:23.959966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.655 [2024-07-26 14:08:23.959984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.655 [2024-07-26 14:08:23.959992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.655 [2024-07-26 14:08:23.959998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.655 [2024-07-26 14:08:23.960015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.655 qpair failed and we were unable to recover it. 00:26:56.655 [2024-07-26 14:08:23.969846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.655 [2024-07-26 14:08:23.970018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.655 [2024-07-26 14:08:23.970035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.655 [2024-07-26 14:08:23.970047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.655 [2024-07-26 14:08:23.970054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.655 [2024-07-26 14:08:23.970073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.655 qpair failed and we were unable to recover it. 00:26:56.655 [2024-07-26 14:08:23.979933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.655 [2024-07-26 14:08:23.980095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.655 [2024-07-26 14:08:23.980113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.655 [2024-07-26 14:08:23.980120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.655 [2024-07-26 14:08:23.980126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.655 [2024-07-26 14:08:23.980143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.655 qpair failed and we were unable to recover it. 
00:26:56.655 [2024-07-26 14:08:23.989969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.655 [2024-07-26 14:08:23.990128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.655 [2024-07-26 14:08:23.990145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.655 [2024-07-26 14:08:23.990153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.655 [2024-07-26 14:08:23.990159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.655 [2024-07-26 14:08:23.990176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.655 qpair failed and we were unable to recover it. 00:26:56.655 [2024-07-26 14:08:23.999999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.656 [2024-07-26 14:08:24.000159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.656 [2024-07-26 14:08:24.000177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.656 [2024-07-26 14:08:24.000185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.656 [2024-07-26 14:08:24.000191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.656 [2024-07-26 14:08:24.000209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.656 qpair failed and we were unable to recover it. 00:26:56.656 [2024-07-26 14:08:24.010010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.656 [2024-07-26 14:08:24.010164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.656 [2024-07-26 14:08:24.010182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.656 [2024-07-26 14:08:24.010190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.656 [2024-07-26 14:08:24.010196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.656 [2024-07-26 14:08:24.010213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.656 qpair failed and we were unable to recover it. 
00:26:56.656 [2024-07-26 14:08:24.020072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.656 [2024-07-26 14:08:24.020248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.656 [2024-07-26 14:08:24.020265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.656 [2024-07-26 14:08:24.020273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.656 [2024-07-26 14:08:24.020279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.656 [2024-07-26 14:08:24.020296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.656 qpair failed and we were unable to recover it. 00:26:56.656 [2024-07-26 14:08:24.030099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.656 [2024-07-26 14:08:24.030264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.656 [2024-07-26 14:08:24.030282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.656 [2024-07-26 14:08:24.030289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.656 [2024-07-26 14:08:24.030295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.656 [2024-07-26 14:08:24.030312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.656 qpair failed and we were unable to recover it. 00:26:56.656 [2024-07-26 14:08:24.040127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.656 [2024-07-26 14:08:24.040275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.656 [2024-07-26 14:08:24.040295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.656 [2024-07-26 14:08:24.040303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.656 [2024-07-26 14:08:24.040309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.656 [2024-07-26 14:08:24.040327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.656 qpair failed and we were unable to recover it. 
00:26:56.656 [2024-07-26 14:08:24.050087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.656 [2024-07-26 14:08:24.050241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.656 [2024-07-26 14:08:24.050259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.656 [2024-07-26 14:08:24.050267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.656 [2024-07-26 14:08:24.050273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.656 [2024-07-26 14:08:24.050291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.656 qpair failed and we were unable to recover it. 00:26:56.656 [2024-07-26 14:08:24.060129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.656 [2024-07-26 14:08:24.060282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.656 [2024-07-26 14:08:24.060300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.656 [2024-07-26 14:08:24.060308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.656 [2024-07-26 14:08:24.060314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.656 [2024-07-26 14:08:24.060331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.656 qpair failed and we were unable to recover it. 00:26:56.656 [2024-07-26 14:08:24.070198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.656 [2024-07-26 14:08:24.070352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.656 [2024-07-26 14:08:24.070370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.656 [2024-07-26 14:08:24.070377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.656 [2024-07-26 14:08:24.070383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.656 [2024-07-26 14:08:24.070401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.656 qpair failed and we were unable to recover it. 
00:26:56.656 [2024-07-26 14:08:24.080234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.656 [2024-07-26 14:08:24.080378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.656 [2024-07-26 14:08:24.080396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.656 [2024-07-26 14:08:24.080404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.656 [2024-07-26 14:08:24.080410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.656 [2024-07-26 14:08:24.080431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.656 qpair failed and we were unable to recover it. 00:26:56.918 [2024-07-26 14:08:24.090276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.918 [2024-07-26 14:08:24.090431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.918 [2024-07-26 14:08:24.090449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.918 [2024-07-26 14:08:24.090457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.918 [2024-07-26 14:08:24.090463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.918 [2024-07-26 14:08:24.090480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.918 qpair failed and we were unable to recover it. 00:26:56.918 [2024-07-26 14:08:24.100284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.918 [2024-07-26 14:08:24.100435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.918 [2024-07-26 14:08:24.100453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.918 [2024-07-26 14:08:24.100460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.918 [2024-07-26 14:08:24.100466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.918 [2024-07-26 14:08:24.100484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.918 qpair failed and we were unable to recover it. 
00:26:56.918 [2024-07-26 14:08:24.110337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.918 [2024-07-26 14:08:24.110488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.918 [2024-07-26 14:08:24.110506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.918 [2024-07-26 14:08:24.110513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.918 [2024-07-26 14:08:24.110519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.918 [2024-07-26 14:08:24.110536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.918 qpair failed and we were unable to recover it. 00:26:56.918 [2024-07-26 14:08:24.120369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.918 [2024-07-26 14:08:24.120522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.918 [2024-07-26 14:08:24.120539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.918 [2024-07-26 14:08:24.120547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.918 [2024-07-26 14:08:24.120554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.918 [2024-07-26 14:08:24.120571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.918 qpair failed and we were unable to recover it. 00:26:56.918 [2024-07-26 14:08:24.130438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.918 [2024-07-26 14:08:24.130603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.918 [2024-07-26 14:08:24.130625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.918 [2024-07-26 14:08:24.130632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.918 [2024-07-26 14:08:24.130638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.918 [2024-07-26 14:08:24.130655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.918 qpair failed and we were unable to recover it. 
00:26:56.918 [2024-07-26 14:08:24.140405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.918 [2024-07-26 14:08:24.140554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.918 [2024-07-26 14:08:24.140572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.918 [2024-07-26 14:08:24.140579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.918 [2024-07-26 14:08:24.140585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.918 [2024-07-26 14:08:24.140603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.918 qpair failed and we were unable to recover it. 00:26:56.918 [2024-07-26 14:08:24.150458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-07-26 14:08:24.150608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-07-26 14:08:24.150626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-07-26 14:08:24.150633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-07-26 14:08:24.150639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.919 [2024-07-26 14:08:24.150656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 00:26:56.919 [2024-07-26 14:08:24.160482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-07-26 14:08:24.160633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-07-26 14:08:24.160651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-07-26 14:08:24.160658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-07-26 14:08:24.160664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.919 [2024-07-26 14:08:24.160682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 
00:26:56.919 [2024-07-26 14:08:24.170526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-07-26 14:08:24.170676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-07-26 14:08:24.170693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-07-26 14:08:24.170701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-07-26 14:08:24.170711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.919 [2024-07-26 14:08:24.170730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 00:26:56.919 [2024-07-26 14:08:24.180519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-07-26 14:08:24.180668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-07-26 14:08:24.180686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-07-26 14:08:24.180694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-07-26 14:08:24.180700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.919 [2024-07-26 14:08:24.180717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 00:26:56.919 [2024-07-26 14:08:24.190577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-07-26 14:08:24.190729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-07-26 14:08:24.190746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-07-26 14:08:24.190754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-07-26 14:08:24.190760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.919 [2024-07-26 14:08:24.190777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 
00:26:56.919 [2024-07-26 14:08:24.200588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-07-26 14:08:24.200736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-07-26 14:08:24.200754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-07-26 14:08:24.200761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-07-26 14:08:24.200767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.919 [2024-07-26 14:08:24.200785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 00:26:56.919 [2024-07-26 14:08:24.210636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-07-26 14:08:24.210786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-07-26 14:08:24.210804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-07-26 14:08:24.210811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-07-26 14:08:24.210818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.919 [2024-07-26 14:08:24.210835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 00:26:56.919 [2024-07-26 14:08:24.220632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-07-26 14:08:24.220788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-07-26 14:08:24.220806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-07-26 14:08:24.220813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-07-26 14:08:24.220819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.919 [2024-07-26 14:08:24.220836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 
00:26:56.919 [2024-07-26 14:08:24.230690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-07-26 14:08:24.230844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-07-26 14:08:24.230862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-07-26 14:08:24.230870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-07-26 14:08:24.230876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.919 [2024-07-26 14:08:24.230893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 00:26:56.919 [2024-07-26 14:08:24.240712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-07-26 14:08:24.240860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-07-26 14:08:24.240878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-07-26 14:08:24.240885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-07-26 14:08:24.240891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.919 [2024-07-26 14:08:24.240908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 00:26:56.919 [2024-07-26 14:08:24.250655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-07-26 14:08:24.250804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-07-26 14:08:24.250822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-07-26 14:08:24.250831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-07-26 14:08:24.250838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.919 [2024-07-26 14:08:24.250857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 
00:26:56.919 [2024-07-26 14:08:24.260738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-07-26 14:08:24.260887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-07-26 14:08:24.260905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-07-26 14:08:24.260913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-07-26 14:08:24.260925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.919 [2024-07-26 14:08:24.260942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 00:26:56.919 [2024-07-26 14:08:24.270720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-07-26 14:08:24.270881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-07-26 14:08:24.270900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-07-26 14:08:24.270907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-07-26 14:08:24.270913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.919 [2024-07-26 14:08:24.270932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 00:26:56.919 [2024-07-26 14:08:24.280811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.920 [2024-07-26 14:08:24.280960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.920 [2024-07-26 14:08:24.280977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.920 [2024-07-26 14:08:24.280985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.920 [2024-07-26 14:08:24.280991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.920 [2024-07-26 14:08:24.281009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.920 qpair failed and we were unable to recover it. 
00:26:56.920 [2024-07-26 14:08:24.290824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.920 [2024-07-26 14:08:24.290978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.920 [2024-07-26 14:08:24.290995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.920 [2024-07-26 14:08:24.291003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.920 [2024-07-26 14:08:24.291010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.920 [2024-07-26 14:08:24.291027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.920 qpair failed and we were unable to recover it. 00:26:56.920 [2024-07-26 14:08:24.300850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.920 [2024-07-26 14:08:24.301002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.920 [2024-07-26 14:08:24.301019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.920 [2024-07-26 14:08:24.301026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.920 [2024-07-26 14:08:24.301032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.920 [2024-07-26 14:08:24.301055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.920 qpair failed and we were unable to recover it. 00:26:56.920 [2024-07-26 14:08:24.310929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.920 [2024-07-26 14:08:24.311107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.920 [2024-07-26 14:08:24.311125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.920 [2024-07-26 14:08:24.311132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.920 [2024-07-26 14:08:24.311139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.920 [2024-07-26 14:08:24.311156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.920 qpair failed and we were unable to recover it. 
00:26:56.920 [2024-07-26 14:08:24.320925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.920 [2024-07-26 14:08:24.321085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.920 [2024-07-26 14:08:24.321102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.920 [2024-07-26 14:08:24.321109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.920 [2024-07-26 14:08:24.321116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.920 [2024-07-26 14:08:24.321134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.920 qpair failed and we were unable to recover it. 00:26:56.920 [2024-07-26 14:08:24.330877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.920 [2024-07-26 14:08:24.331026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.920 [2024-07-26 14:08:24.331052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.920 [2024-07-26 14:08:24.331060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.920 [2024-07-26 14:08:24.331066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.920 [2024-07-26 14:08:24.331084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.920 qpair failed and we were unable to recover it. 00:26:56.920 [2024-07-26 14:08:24.340973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.920 [2024-07-26 14:08:24.341131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.920 [2024-07-26 14:08:24.341149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.920 [2024-07-26 14:08:24.341157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.920 [2024-07-26 14:08:24.341163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.920 [2024-07-26 14:08:24.341180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.920 qpair failed and we were unable to recover it. 
00:26:56.920 [2024-07-26 14:08:24.351015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.920 [2024-07-26 14:08:24.351173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.920 [2024-07-26 14:08:24.351191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.920 [2024-07-26 14:08:24.351202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.920 [2024-07-26 14:08:24.351208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:56.920 [2024-07-26 14:08:24.351227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.920 qpair failed and we were unable to recover it. 00:26:57.183 [2024-07-26 14:08:24.361055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.183 [2024-07-26 14:08:24.361222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.183 [2024-07-26 14:08:24.361240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.183 [2024-07-26 14:08:24.361248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.183 [2024-07-26 14:08:24.361254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.183 [2024-07-26 14:08:24.361272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.183 qpair failed and we were unable to recover it. 00:26:57.183 [2024-07-26 14:08:24.371065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.183 [2024-07-26 14:08:24.371217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.183 [2024-07-26 14:08:24.371234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.183 [2024-07-26 14:08:24.371242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.183 [2024-07-26 14:08:24.371248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.183 [2024-07-26 14:08:24.371265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.183 qpair failed and we were unable to recover it. 
00:26:57.183 [2024-07-26 14:08:24.381090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-07-26 14:08:24.381247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-07-26 14:08:24.381265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-07-26 14:08:24.381273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-07-26 14:08:24.381279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.184 [2024-07-26 14:08:24.381296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 00:26:57.184 [2024-07-26 14:08:24.391055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-07-26 14:08:24.391209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-07-26 14:08:24.391227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-07-26 14:08:24.391235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-07-26 14:08:24.391241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.184 [2024-07-26 14:08:24.391259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 00:26:57.184 [2024-07-26 14:08:24.401152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-07-26 14:08:24.401302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-07-26 14:08:24.401320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-07-26 14:08:24.401327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-07-26 14:08:24.401334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.184 [2024-07-26 14:08:24.401351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 
00:26:57.184 [2024-07-26 14:08:24.411157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-07-26 14:08:24.411306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-07-26 14:08:24.411324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-07-26 14:08:24.411331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-07-26 14:08:24.411338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.184 [2024-07-26 14:08:24.411355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 00:26:57.184 [2024-07-26 14:08:24.421193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-07-26 14:08:24.421344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-07-26 14:08:24.421362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-07-26 14:08:24.421369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-07-26 14:08:24.421375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.184 [2024-07-26 14:08:24.421392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 00:26:57.184 [2024-07-26 14:08:24.431243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-07-26 14:08:24.431414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-07-26 14:08:24.431433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-07-26 14:08:24.431440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-07-26 14:08:24.431446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.184 [2024-07-26 14:08:24.431464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 
00:26:57.184 [2024-07-26 14:08:24.441274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-07-26 14:08:24.441421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-07-26 14:08:24.441442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-07-26 14:08:24.441450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-07-26 14:08:24.441456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.184 [2024-07-26 14:08:24.441473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 00:26:57.184 [2024-07-26 14:08:24.451297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-07-26 14:08:24.451443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-07-26 14:08:24.451461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-07-26 14:08:24.451468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-07-26 14:08:24.451475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.184 [2024-07-26 14:08:24.451492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 00:26:57.184 [2024-07-26 14:08:24.461292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-07-26 14:08:24.461453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-07-26 14:08:24.461470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-07-26 14:08:24.461478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-07-26 14:08:24.461484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.184 [2024-07-26 14:08:24.461500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 
00:26:57.184 [2024-07-26 14:08:24.471343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-07-26 14:08:24.471495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-07-26 14:08:24.471513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-07-26 14:08:24.471520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-07-26 14:08:24.471526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.184 [2024-07-26 14:08:24.471543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 00:26:57.184 [2024-07-26 14:08:24.481369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-07-26 14:08:24.481520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-07-26 14:08:24.481537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-07-26 14:08:24.481544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-07-26 14:08:24.481550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.184 [2024-07-26 14:08:24.481571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 00:26:57.184 [2024-07-26 14:08:24.491394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-07-26 14:08:24.491542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-07-26 14:08:24.491560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-07-26 14:08:24.491568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-07-26 14:08:24.491574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.184 [2024-07-26 14:08:24.491591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 
00:26:57.184 [2024-07-26 14:08:24.501404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-07-26 14:08:24.501556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-07-26 14:08:24.501574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-07-26 14:08:24.501581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-07-26 14:08:24.501587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.184 [2024-07-26 14:08:24.501604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 00:26:57.184 [2024-07-26 14:08:24.511453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-07-26 14:08:24.511608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-07-26 14:08:24.511625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.185 [2024-07-26 14:08:24.511632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.185 [2024-07-26 14:08:24.511638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.185 [2024-07-26 14:08:24.511655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.185 qpair failed and we were unable to recover it. 00:26:57.185 [2024-07-26 14:08:24.521525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.185 [2024-07-26 14:08:24.521697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.185 [2024-07-26 14:08:24.521715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.185 [2024-07-26 14:08:24.521722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.185 [2024-07-26 14:08:24.521729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.185 [2024-07-26 14:08:24.521745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.185 qpair failed and we were unable to recover it. 
00:26:57.185 [2024-07-26 14:08:24.531517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.185 [2024-07-26 14:08:24.531666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.185 [2024-07-26 14:08:24.531687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.185 [2024-07-26 14:08:24.531694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.185 [2024-07-26 14:08:24.531700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.185 [2024-07-26 14:08:24.531717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.185 qpair failed and we were unable to recover it. 00:26:57.185 [2024-07-26 14:08:24.541548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.185 [2024-07-26 14:08:24.541734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.185 [2024-07-26 14:08:24.541752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.185 [2024-07-26 14:08:24.541759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.185 [2024-07-26 14:08:24.541766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.185 [2024-07-26 14:08:24.541783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.185 qpair failed and we were unable to recover it. 00:26:57.185 [2024-07-26 14:08:24.551571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.185 [2024-07-26 14:08:24.551725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.185 [2024-07-26 14:08:24.551743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.185 [2024-07-26 14:08:24.551751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.185 [2024-07-26 14:08:24.551757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.185 [2024-07-26 14:08:24.551774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.185 qpair failed and we were unable to recover it. 
00:26:57.185 [2024-07-26 14:08:24.561532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.185 [2024-07-26 14:08:24.561684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.185 [2024-07-26 14:08:24.561701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.185 [2024-07-26 14:08:24.561709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.185 [2024-07-26 14:08:24.561715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.185 [2024-07-26 14:08:24.561732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.185 qpair failed and we were unable to recover it. 00:26:57.185 [2024-07-26 14:08:24.571838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.185 [2024-07-26 14:08:24.571991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.185 [2024-07-26 14:08:24.572009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.185 [2024-07-26 14:08:24.572016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.185 [2024-07-26 14:08:24.572022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.185 [2024-07-26 14:08:24.572052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.185 qpair failed and we were unable to recover it. 00:26:57.185 [2024-07-26 14:08:24.581633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.185 [2024-07-26 14:08:24.581783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.185 [2024-07-26 14:08:24.581801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.185 [2024-07-26 14:08:24.581809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.185 [2024-07-26 14:08:24.581815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.185 [2024-07-26 14:08:24.581832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.185 qpair failed and we were unable to recover it. 
00:26:57.185 [2024-07-26 14:08:24.591691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.185 [2024-07-26 14:08:24.591846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.185 [2024-07-26 14:08:24.591864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.185 [2024-07-26 14:08:24.591871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.185 [2024-07-26 14:08:24.591877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.185 [2024-07-26 14:08:24.591894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.185 qpair failed and we were unable to recover it. 00:26:57.185 [2024-07-26 14:08:24.601713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.185 [2024-07-26 14:08:24.601882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.185 [2024-07-26 14:08:24.601900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.185 [2024-07-26 14:08:24.601907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.185 [2024-07-26 14:08:24.601913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.185 [2024-07-26 14:08:24.601931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.185 qpair failed and we were unable to recover it. 00:26:57.185 [2024-07-26 14:08:24.611673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.185 [2024-07-26 14:08:24.611864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.185 [2024-07-26 14:08:24.611882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.185 [2024-07-26 14:08:24.611889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.185 [2024-07-26 14:08:24.611895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.185 [2024-07-26 14:08:24.611912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.185 qpair failed and we were unable to recover it. 
00:26:57.446 [2024-07-26 14:08:24.621755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.446 [2024-07-26 14:08:24.621910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.446 [2024-07-26 14:08:24.621928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.446 [2024-07-26 14:08:24.621936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.446 [2024-07-26 14:08:24.621942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.446 [2024-07-26 14:08:24.621959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.446 qpair failed and we were unable to recover it. 00:26:57.446 [2024-07-26 14:08:24.631800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.446 [2024-07-26 14:08:24.631953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.446 [2024-07-26 14:08:24.631970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.446 [2024-07-26 14:08:24.631977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.446 [2024-07-26 14:08:24.631983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.446 [2024-07-26 14:08:24.632000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.446 qpair failed and we were unable to recover it. 00:26:57.446 [2024-07-26 14:08:24.641874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.446 [2024-07-26 14:08:24.642053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.446 [2024-07-26 14:08:24.642071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.446 [2024-07-26 14:08:24.642078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.446 [2024-07-26 14:08:24.642085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.446 [2024-07-26 14:08:24.642102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.446 qpair failed and we were unable to recover it. 
00:26:57.446 [2024-07-26 14:08:24.651872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.446 [2024-07-26 14:08:24.652026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.446 [2024-07-26 14:08:24.652052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.446 [2024-07-26 14:08:24.652060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.446 [2024-07-26 14:08:24.652066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.446 [2024-07-26 14:08:24.652084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.446 qpair failed and we were unable to recover it. 00:26:57.446 [2024-07-26 14:08:24.661876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.446 [2024-07-26 14:08:24.662029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.446 [2024-07-26 14:08:24.662053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.446 [2024-07-26 14:08:24.662061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.446 [2024-07-26 14:08:24.662070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.446 [2024-07-26 14:08:24.662088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.446 qpair failed and we were unable to recover it. 00:26:57.446 [2024-07-26 14:08:24.671922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.446 [2024-07-26 14:08:24.672077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.446 [2024-07-26 14:08:24.672094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.446 [2024-07-26 14:08:24.672101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.446 [2024-07-26 14:08:24.672107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.446 [2024-07-26 14:08:24.672125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.446 qpair failed and we were unable to recover it. 
00:26:57.446 [2024-07-26 14:08:24.681948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.446 [2024-07-26 14:08:24.682103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.446 [2024-07-26 14:08:24.682121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.446 [2024-07-26 14:08:24.682128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.446 [2024-07-26 14:08:24.682135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.446 [2024-07-26 14:08:24.682152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.446 qpair failed and we were unable to recover it. 00:26:57.446 [2024-07-26 14:08:24.691941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.446 [2024-07-26 14:08:24.692097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.446 [2024-07-26 14:08:24.692115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.446 [2024-07-26 14:08:24.692123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.446 [2024-07-26 14:08:24.692129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.446 [2024-07-26 14:08:24.692146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.446 qpair failed and we were unable to recover it. 00:26:57.446 [2024-07-26 14:08:24.701982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.447 [2024-07-26 14:08:24.702139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.447 [2024-07-26 14:08:24.702156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.447 [2024-07-26 14:08:24.702164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.447 [2024-07-26 14:08:24.702170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.447 [2024-07-26 14:08:24.702188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.447 qpair failed and we were unable to recover it. 
00:26:57.447 [2024-07-26 14:08:24.712023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.447 [2024-07-26 14:08:24.712181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.447 [2024-07-26 14:08:24.712199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.447 [2024-07-26 14:08:24.712206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.447 [2024-07-26 14:08:24.712212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.447 [2024-07-26 14:08:24.712230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.447 qpair failed and we were unable to recover it. 00:26:57.447 [2024-07-26 14:08:24.722000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.447 [2024-07-26 14:08:24.722166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.447 [2024-07-26 14:08:24.722185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.447 [2024-07-26 14:08:24.722193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.447 [2024-07-26 14:08:24.722199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.447 [2024-07-26 14:08:24.722216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.447 qpair failed and we were unable to recover it. 00:26:57.447 [2024-07-26 14:08:24.732099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.447 [2024-07-26 14:08:24.732252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.447 [2024-07-26 14:08:24.732270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.447 [2024-07-26 14:08:24.732278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.447 [2024-07-26 14:08:24.732284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.447 [2024-07-26 14:08:24.732301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.447 qpair failed and we were unable to recover it. 
00:26:57.447 [2024-07-26 14:08:24.742108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.447 [2024-07-26 14:08:24.742300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.447 [2024-07-26 14:08:24.742318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.447 [2024-07-26 14:08:24.742325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.447 [2024-07-26 14:08:24.742332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.447 [2024-07-26 14:08:24.742349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.447 qpair failed and we were unable to recover it. 00:26:57.447 [2024-07-26 14:08:24.752139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.447 [2024-07-26 14:08:24.752295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.447 [2024-07-26 14:08:24.752313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.447 [2024-07-26 14:08:24.752328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.447 [2024-07-26 14:08:24.752334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.447 [2024-07-26 14:08:24.752353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.447 qpair failed and we were unable to recover it. 00:26:57.447 [2024-07-26 14:08:24.762183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.447 [2024-07-26 14:08:24.762336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.447 [2024-07-26 14:08:24.762354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.447 [2024-07-26 14:08:24.762361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.447 [2024-07-26 14:08:24.762368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.447 [2024-07-26 14:08:24.762385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.447 qpair failed and we were unable to recover it. 
00:26:57.447 [2024-07-26 14:08:24.772231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.447 [2024-07-26 14:08:24.772388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.447 [2024-07-26 14:08:24.772406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.447 [2024-07-26 14:08:24.772414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.447 [2024-07-26 14:08:24.772420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.447 [2024-07-26 14:08:24.772437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.447 qpair failed and we were unable to recover it. 00:26:57.447 [2024-07-26 14:08:24.782246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.447 [2024-07-26 14:08:24.782433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.447 [2024-07-26 14:08:24.782450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.447 [2024-07-26 14:08:24.782458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.447 [2024-07-26 14:08:24.782464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.447 [2024-07-26 14:08:24.782481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.447 qpair failed and we were unable to recover it. 00:26:57.447 [2024-07-26 14:08:24.792277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.447 [2024-07-26 14:08:24.792427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.447 [2024-07-26 14:08:24.792445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.447 [2024-07-26 14:08:24.792453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.447 [2024-07-26 14:08:24.792460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.447 [2024-07-26 14:08:24.792477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.447 qpair failed and we were unable to recover it. 
00:26:57.447 [2024-07-26 14:08:24.802292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.447 [2024-07-26 14:08:24.802447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.447 [2024-07-26 14:08:24.802465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.447 [2024-07-26 14:08:24.802472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.447 [2024-07-26 14:08:24.802478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.447 [2024-07-26 14:08:24.802496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.447 qpair failed and we were unable to recover it. 00:26:57.447 [2024-07-26 14:08:24.812330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.447 [2024-07-26 14:08:24.812485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.447 [2024-07-26 14:08:24.812503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.447 [2024-07-26 14:08:24.812511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.447 [2024-07-26 14:08:24.812517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.447 [2024-07-26 14:08:24.812534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.447 qpair failed and we were unable to recover it. 00:26:57.447 [2024-07-26 14:08:24.822382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.447 [2024-07-26 14:08:24.822531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.448 [2024-07-26 14:08:24.822549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.448 [2024-07-26 14:08:24.822556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.448 [2024-07-26 14:08:24.822562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.448 [2024-07-26 14:08:24.822579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.448 qpair failed and we were unable to recover it. 
00:26:57.448 [2024-07-26 14:08:24.832363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.448 [2024-07-26 14:08:24.832524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.448 [2024-07-26 14:08:24.832541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.448 [2024-07-26 14:08:24.832548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.448 [2024-07-26 14:08:24.832554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.448 [2024-07-26 14:08:24.832572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.448 qpair failed and we were unable to recover it. 00:26:57.448 [2024-07-26 14:08:24.842412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.448 [2024-07-26 14:08:24.842558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.448 [2024-07-26 14:08:24.842576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.448 [2024-07-26 14:08:24.842586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.448 [2024-07-26 14:08:24.842592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.448 [2024-07-26 14:08:24.842610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.448 qpair failed and we were unable to recover it. 00:26:57.448 [2024-07-26 14:08:24.852438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.448 [2024-07-26 14:08:24.852814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.448 [2024-07-26 14:08:24.852832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.448 [2024-07-26 14:08:24.852839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.448 [2024-07-26 14:08:24.852846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.448 [2024-07-26 14:08:24.852862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.448 qpair failed and we were unable to recover it. 
00:26:57.448 [2024-07-26 14:08:24.862451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.448 [2024-07-26 14:08:24.862605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.448 [2024-07-26 14:08:24.862624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.448 [2024-07-26 14:08:24.862631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.448 [2024-07-26 14:08:24.862637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.448 [2024-07-26 14:08:24.862655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.448 qpair failed and we were unable to recover it. 00:26:57.448 [2024-07-26 14:08:24.872503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.448 [2024-07-26 14:08:24.872657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.448 [2024-07-26 14:08:24.872674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.448 [2024-07-26 14:08:24.872682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.448 [2024-07-26 14:08:24.872689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.448 [2024-07-26 14:08:24.872705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.448 qpair failed and we were unable to recover it. 00:26:57.709 [2024-07-26 14:08:24.882545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.709 [2024-07-26 14:08:24.882691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.709 [2024-07-26 14:08:24.882708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.709 [2024-07-26 14:08:24.882716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.709 [2024-07-26 14:08:24.882722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.709 [2024-07-26 14:08:24.882740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.709 qpair failed and we were unable to recover it. 
00:26:57.709 [2024-07-26 14:08:24.892569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.709 [2024-07-26 14:08:24.892725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.709 [2024-07-26 14:08:24.892743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.709 [2024-07-26 14:08:24.892750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.709 [2024-07-26 14:08:24.892757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.709 [2024-07-26 14:08:24.892773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.709 qpair failed and we were unable to recover it. 00:26:57.709 [2024-07-26 14:08:24.902526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.709 [2024-07-26 14:08:24.902675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.709 [2024-07-26 14:08:24.902694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.709 [2024-07-26 14:08:24.902701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.709 [2024-07-26 14:08:24.902710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.709 [2024-07-26 14:08:24.902728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.709 qpair failed and we were unable to recover it. 00:26:57.709 [2024-07-26 14:08:24.912603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.709 [2024-07-26 14:08:24.912759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.709 [2024-07-26 14:08:24.912778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.709 [2024-07-26 14:08:24.912785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.709 [2024-07-26 14:08:24.912792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.709 [2024-07-26 14:08:24.912809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.709 qpair failed and we were unable to recover it. 
00:26:57.709 [2024-07-26 14:08:24.922700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.709 [2024-07-26 14:08:24.922889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.709 [2024-07-26 14:08:24.922907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.709 [2024-07-26 14:08:24.922915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.709 [2024-07-26 14:08:24.922922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.709 [2024-07-26 14:08:24.922939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.709 qpair failed and we were unable to recover it. 00:26:57.709 [2024-07-26 14:08:24.932652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.709 [2024-07-26 14:08:24.932809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.709 [2024-07-26 14:08:24.932833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.709 [2024-07-26 14:08:24.932841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.709 [2024-07-26 14:08:24.932847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.709 [2024-07-26 14:08:24.932865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.709 qpair failed and we were unable to recover it. 00:26:57.709 [2024-07-26 14:08:24.942687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.709 [2024-07-26 14:08:24.942842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.709 [2024-07-26 14:08:24.942860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.709 [2024-07-26 14:08:24.942867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.709 [2024-07-26 14:08:24.942873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.709 [2024-07-26 14:08:24.942890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.709 qpair failed and we were unable to recover it. 
00:26:57.709 [2024-07-26 14:08:24.952711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.709 [2024-07-26 14:08:24.952868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.709 [2024-07-26 14:08:24.952886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.709 [2024-07-26 14:08:24.952893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.709 [2024-07-26 14:08:24.952899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.709 [2024-07-26 14:08:24.952916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.709 qpair failed and we were unable to recover it. 00:26:57.709 [2024-07-26 14:08:24.962844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.709 [2024-07-26 14:08:24.963005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.709 [2024-07-26 14:08:24.963023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.709 [2024-07-26 14:08:24.963030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.709 [2024-07-26 14:08:24.963037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.709 [2024-07-26 14:08:24.963063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.709 qpair failed and we were unable to recover it. 00:26:57.709 [2024-07-26 14:08:24.972753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.709 [2024-07-26 14:08:24.972902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.709 [2024-07-26 14:08:24.972920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.709 [2024-07-26 14:08:24.972927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.709 [2024-07-26 14:08:24.972933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.709 [2024-07-26 14:08:24.972955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.709 qpair failed and we were unable to recover it. 
00:26:57.709 [2024-07-26 14:08:24.982845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.709 [2024-07-26 14:08:24.983000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.709 [2024-07-26 14:08:24.983018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.709 [2024-07-26 14:08:24.983025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.709 [2024-07-26 14:08:24.983031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.709 [2024-07-26 14:08:24.983057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.709 qpair failed and we were unable to recover it. 00:26:57.709 [2024-07-26 14:08:24.992813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.709 [2024-07-26 14:08:24.992969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.710 [2024-07-26 14:08:24.992987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.710 [2024-07-26 14:08:24.992994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.710 [2024-07-26 14:08:24.993000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.710 [2024-07-26 14:08:24.993017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.710 qpair failed and we were unable to recover it. 00:26:57.710 [2024-07-26 14:08:25.002812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.710 [2024-07-26 14:08:25.002964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.710 [2024-07-26 14:08:25.002981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.710 [2024-07-26 14:08:25.002989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.710 [2024-07-26 14:08:25.002995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.710 [2024-07-26 14:08:25.003012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.710 qpair failed and we were unable to recover it. 
00:26:57.710 [2024-07-26 14:08:25.012895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.710 [2024-07-26 14:08:25.013056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.710 [2024-07-26 14:08:25.013075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.710 [2024-07-26 14:08:25.013082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.710 [2024-07-26 14:08:25.013088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.710 [2024-07-26 14:08:25.013106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.710 qpair failed and we were unable to recover it. 00:26:57.710 [2024-07-26 14:08:25.022936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.710 [2024-07-26 14:08:25.023104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.710 [2024-07-26 14:08:25.023124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.710 [2024-07-26 14:08:25.023132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.710 [2024-07-26 14:08:25.023138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.710 [2024-07-26 14:08:25.023155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.710 qpair failed and we were unable to recover it. 00:26:57.710 [2024-07-26 14:08:25.032966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.710 [2024-07-26 14:08:25.033125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.710 [2024-07-26 14:08:25.033143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.710 [2024-07-26 14:08:25.033151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.710 [2024-07-26 14:08:25.033157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.710 [2024-07-26 14:08:25.033174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.710 qpair failed and we were unable to recover it. 
00:26:57.710 [2024-07-26 14:08:25.043001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.710 [2024-07-26 14:08:25.043162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.710 [2024-07-26 14:08:25.043181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.710 [2024-07-26 14:08:25.043188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.710 [2024-07-26 14:08:25.043195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.710 [2024-07-26 14:08:25.043213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.710 qpair failed and we were unable to recover it. 00:26:57.710 [2024-07-26 14:08:25.053039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.710 [2024-07-26 14:08:25.053206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.710 [2024-07-26 14:08:25.053224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.710 [2024-07-26 14:08:25.053232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.710 [2024-07-26 14:08:25.053238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.710 [2024-07-26 14:08:25.053255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.710 qpair failed and we were unable to recover it. 00:26:57.710 [2024-07-26 14:08:25.063018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.710 [2024-07-26 14:08:25.063180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.710 [2024-07-26 14:08:25.063198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.710 [2024-07-26 14:08:25.063205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.710 [2024-07-26 14:08:25.063215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.710 [2024-07-26 14:08:25.063234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.710 qpair failed and we were unable to recover it. 
00:26:57.710 [2024-07-26 14:08:25.073076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.710 [2024-07-26 14:08:25.073234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.710 [2024-07-26 14:08:25.073252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.710 [2024-07-26 14:08:25.073260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.710 [2024-07-26 14:08:25.073266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.710 [2024-07-26 14:08:25.073283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.710 qpair failed and we were unable to recover it. 00:26:57.710 [2024-07-26 14:08:25.083078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.710 [2024-07-26 14:08:25.083231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.710 [2024-07-26 14:08:25.083249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.710 [2024-07-26 14:08:25.083256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.710 [2024-07-26 14:08:25.083263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.710 [2024-07-26 14:08:25.083280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.710 qpair failed and we were unable to recover it. 00:26:57.710 [2024-07-26 14:08:25.093076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.710 [2024-07-26 14:08:25.093234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.710 [2024-07-26 14:08:25.093252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.710 [2024-07-26 14:08:25.093259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.710 [2024-07-26 14:08:25.093265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.710 [2024-07-26 14:08:25.093282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.710 qpair failed and we were unable to recover it. 
00:26:57.710 [2024-07-26 14:08:25.103139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.710 [2024-07-26 14:08:25.103326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.710 [2024-07-26 14:08:25.103344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.710 [2024-07-26 14:08:25.103351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.710 [2024-07-26 14:08:25.103357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.710 [2024-07-26 14:08:25.103376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.710 qpair failed and we were unable to recover it. 00:26:57.710 [2024-07-26 14:08:25.113117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.710 [2024-07-26 14:08:25.113277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.710 [2024-07-26 14:08:25.113295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.710 [2024-07-26 14:08:25.113302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.710 [2024-07-26 14:08:25.113308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.710 [2024-07-26 14:08:25.113325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.710 qpair failed and we were unable to recover it. 00:26:57.710 [2024-07-26 14:08:25.123429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.710 [2024-07-26 14:08:25.123586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.710 [2024-07-26 14:08:25.123603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.710 [2024-07-26 14:08:25.123611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.711 [2024-07-26 14:08:25.123617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.711 [2024-07-26 14:08:25.123634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.711 qpair failed and we were unable to recover it. 
00:26:57.711 [2024-07-26 14:08:25.133243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.711 [2024-07-26 14:08:25.133394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.711 [2024-07-26 14:08:25.133412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.711 [2024-07-26 14:08:25.133420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.711 [2024-07-26 14:08:25.133426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.711 [2024-07-26 14:08:25.133443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.711 qpair failed and we were unable to recover it. 00:26:57.711 [2024-07-26 14:08:25.143258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.972 [2024-07-26 14:08:25.143408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.972 [2024-07-26 14:08:25.143428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.972 [2024-07-26 14:08:25.143436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.972 [2024-07-26 14:08:25.143442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.972 [2024-07-26 14:08:25.143460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.972 qpair failed and we were unable to recover it. 00:26:57.972 [2024-07-26 14:08:25.153310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.972 [2024-07-26 14:08:25.153480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.972 [2024-07-26 14:08:25.153499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.972 [2024-07-26 14:08:25.153510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.972 [2024-07-26 14:08:25.153516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.972 [2024-07-26 14:08:25.153534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.972 qpair failed and we were unable to recover it. 
00:26:57.972 [2024-07-26 14:08:25.163331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.972 [2024-07-26 14:08:25.163482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.972 [2024-07-26 14:08:25.163500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.972 [2024-07-26 14:08:25.163507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.972 [2024-07-26 14:08:25.163513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.972 [2024-07-26 14:08:25.163531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.972 qpair failed and we were unable to recover it. 00:26:57.972 [2024-07-26 14:08:25.173349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.972 [2024-07-26 14:08:25.173499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.972 [2024-07-26 14:08:25.173517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.972 [2024-07-26 14:08:25.173526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.972 [2024-07-26 14:08:25.173532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.972 [2024-07-26 14:08:25.173549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.972 qpair failed and we were unable to recover it. 00:26:57.972 [2024-07-26 14:08:25.183333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.972 [2024-07-26 14:08:25.183486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.972 [2024-07-26 14:08:25.183504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.972 [2024-07-26 14:08:25.183511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.972 [2024-07-26 14:08:25.183517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.972 [2024-07-26 14:08:25.183535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.972 qpair failed and we were unable to recover it. 
00:26:57.972 [2024-07-26 14:08:25.193426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.972 [2024-07-26 14:08:25.193582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.972 [2024-07-26 14:08:25.193599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.972 [2024-07-26 14:08:25.193607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.972 [2024-07-26 14:08:25.193613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.972 [2024-07-26 14:08:25.193630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.972 qpair failed and we were unable to recover it. 00:26:57.972 [2024-07-26 14:08:25.203431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.972 [2024-07-26 14:08:25.203591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.972 [2024-07-26 14:08:25.203610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.972 [2024-07-26 14:08:25.203618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.972 [2024-07-26 14:08:25.203624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.972 [2024-07-26 14:08:25.203641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.972 qpair failed and we were unable to recover it. 00:26:57.972 [2024-07-26 14:08:25.213460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.972 [2024-07-26 14:08:25.213649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.972 [2024-07-26 14:08:25.213667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.972 [2024-07-26 14:08:25.213674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.972 [2024-07-26 14:08:25.213680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.972 [2024-07-26 14:08:25.213698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.972 qpair failed and we were unable to recover it. 
00:26:57.972 [2024-07-26 14:08:25.223498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.973 [2024-07-26 14:08:25.223648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.973 [2024-07-26 14:08:25.223666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.973 [2024-07-26 14:08:25.223674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.973 [2024-07-26 14:08:25.223680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.973 [2024-07-26 14:08:25.223697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.973 qpair failed and we were unable to recover it. 00:26:57.973 [2024-07-26 14:08:25.233470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.973 [2024-07-26 14:08:25.233669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.973 [2024-07-26 14:08:25.233686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.973 [2024-07-26 14:08:25.233694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.973 [2024-07-26 14:08:25.233700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.973 [2024-07-26 14:08:25.233717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.973 qpair failed and we were unable to recover it. 00:26:57.973 [2024-07-26 14:08:25.243568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.973 [2024-07-26 14:08:25.243721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.973 [2024-07-26 14:08:25.243739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.973 [2024-07-26 14:08:25.243750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.973 [2024-07-26 14:08:25.243756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.973 [2024-07-26 14:08:25.243774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.973 qpair failed and we were unable to recover it. 
00:26:57.973 [2024-07-26 14:08:25.253529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.973 [2024-07-26 14:08:25.253681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.973 [2024-07-26 14:08:25.253699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.973 [2024-07-26 14:08:25.253707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.973 [2024-07-26 14:08:25.253713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.973 [2024-07-26 14:08:25.253730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.973 qpair failed and we were unable to recover it. 00:26:57.973 [2024-07-26 14:08:25.263602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.973 [2024-07-26 14:08:25.263754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.973 [2024-07-26 14:08:25.263772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.973 [2024-07-26 14:08:25.263779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.973 [2024-07-26 14:08:25.263785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.973 [2024-07-26 14:08:25.263802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.973 qpair failed and we were unable to recover it. 00:26:57.973 [2024-07-26 14:08:25.273659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.973 [2024-07-26 14:08:25.273846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.973 [2024-07-26 14:08:25.273864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.973 [2024-07-26 14:08:25.273872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.973 [2024-07-26 14:08:25.273878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.973 [2024-07-26 14:08:25.273895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.973 qpair failed and we were unable to recover it. 
00:26:57.973 [2024-07-26 14:08:25.283673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.973 [2024-07-26 14:08:25.283818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.973 [2024-07-26 14:08:25.283837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.973 [2024-07-26 14:08:25.283845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.973 [2024-07-26 14:08:25.283852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.973 [2024-07-26 14:08:25.283872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.973 qpair failed and we were unable to recover it. 00:26:57.973 [2024-07-26 14:08:25.293726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.973 [2024-07-26 14:08:25.293881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.973 [2024-07-26 14:08:25.293899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.973 [2024-07-26 14:08:25.293907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.973 [2024-07-26 14:08:25.293913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.973 [2024-07-26 14:08:25.293930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.973 qpair failed and we were unable to recover it. 00:26:57.973 [2024-07-26 14:08:25.303723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.973 [2024-07-26 14:08:25.303913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.973 [2024-07-26 14:08:25.303931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.973 [2024-07-26 14:08:25.303938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.973 [2024-07-26 14:08:25.303944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.973 [2024-07-26 14:08:25.303961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.973 qpair failed and we were unable to recover it. 
00:26:57.973 [2024-07-26 14:08:25.313797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.973 [2024-07-26 14:08:25.313963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.973 [2024-07-26 14:08:25.313981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.973 [2024-07-26 14:08:25.313989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.973 [2024-07-26 14:08:25.313995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.973 [2024-07-26 14:08:25.314011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.973 qpair failed and we were unable to recover it. 00:26:57.973 [2024-07-26 14:08:25.323804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.973 [2024-07-26 14:08:25.323951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.973 [2024-07-26 14:08:25.323969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.973 [2024-07-26 14:08:25.323976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.973 [2024-07-26 14:08:25.323982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.973 [2024-07-26 14:08:25.323999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.973 qpair failed and we were unable to recover it. 00:26:57.973 [2024-07-26 14:08:25.333843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.973 [2024-07-26 14:08:25.333993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.973 [2024-07-26 14:08:25.334015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.973 [2024-07-26 14:08:25.334022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.973 [2024-07-26 14:08:25.334028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.973 [2024-07-26 14:08:25.334052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.973 qpair failed and we were unable to recover it. 
00:26:57.973 [2024-07-26 14:08:25.343855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.973 [2024-07-26 14:08:25.344008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.973 [2024-07-26 14:08:25.344026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.973 [2024-07-26 14:08:25.344034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.973 [2024-07-26 14:08:25.344040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.973 [2024-07-26 14:08:25.344064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.973 qpair failed and we were unable to recover it. 00:26:57.973 [2024-07-26 14:08:25.353928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.973 [2024-07-26 14:08:25.354110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.974 [2024-07-26 14:08:25.354128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.974 [2024-07-26 14:08:25.354136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.974 [2024-07-26 14:08:25.354142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.974 [2024-07-26 14:08:25.354160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.974 qpair failed and we were unable to recover it. 00:26:57.974 [2024-07-26 14:08:25.363913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.974 [2024-07-26 14:08:25.364068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.974 [2024-07-26 14:08:25.364086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.974 [2024-07-26 14:08:25.364094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.974 [2024-07-26 14:08:25.364100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.974 [2024-07-26 14:08:25.364117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.974 qpair failed and we were unable to recover it. 
00:26:57.974 [2024-07-26 14:08:25.373949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.974 [2024-07-26 14:08:25.374127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.974 [2024-07-26 14:08:25.374144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.974 [2024-07-26 14:08:25.374153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.974 [2024-07-26 14:08:25.374159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.974 [2024-07-26 14:08:25.374181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.974 qpair failed and we were unable to recover it. 00:26:57.974 [2024-07-26 14:08:25.383962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.974 [2024-07-26 14:08:25.384129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.974 [2024-07-26 14:08:25.384147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.974 [2024-07-26 14:08:25.384154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.974 [2024-07-26 14:08:25.384161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.974 [2024-07-26 14:08:25.384178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.974 qpair failed and we were unable to recover it. 00:26:57.974 [2024-07-26 14:08:25.394003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.974 [2024-07-26 14:08:25.394164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.974 [2024-07-26 14:08:25.394182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.974 [2024-07-26 14:08:25.394189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.974 [2024-07-26 14:08:25.394195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.974 [2024-07-26 14:08:25.394213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.974 qpair failed and we were unable to recover it. 
00:26:57.974 [2024-07-26 14:08:25.403987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.974 [2024-07-26 14:08:25.404145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.974 [2024-07-26 14:08:25.404163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.974 [2024-07-26 14:08:25.404171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.974 [2024-07-26 14:08:25.404177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:57.974 [2024-07-26 14:08:25.404194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.974 qpair failed and we were unable to recover it. 00:26:58.235 [2024-07-26 14:08:25.414060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.235 [2024-07-26 14:08:25.414212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.235 [2024-07-26 14:08:25.414231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.235 [2024-07-26 14:08:25.414238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.235 [2024-07-26 14:08:25.414245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.235 [2024-07-26 14:08:25.414262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.235 qpair failed and we were unable to recover it. 00:26:58.235 [2024-07-26 14:08:25.424075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.235 [2024-07-26 14:08:25.424230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.235 [2024-07-26 14:08:25.424251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.235 [2024-07-26 14:08:25.424258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.235 [2024-07-26 14:08:25.424265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.235 [2024-07-26 14:08:25.424282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.235 qpair failed and we were unable to recover it. 
00:26:58.235 [2024-07-26 14:08:25.434113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.235 [2024-07-26 14:08:25.434285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.235 [2024-07-26 14:08:25.434303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.235 [2024-07-26 14:08:25.434310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.235 [2024-07-26 14:08:25.434316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.235 [2024-07-26 14:08:25.434333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.235 qpair failed and we were unable to recover it. 00:26:58.235 [2024-07-26 14:08:25.444374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.235 [2024-07-26 14:08:25.444520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.235 [2024-07-26 14:08:25.444538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.235 [2024-07-26 14:08:25.444546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.235 [2024-07-26 14:08:25.444552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.235 [2024-07-26 14:08:25.444568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.235 qpair failed and we were unable to recover it. 00:26:58.235 [2024-07-26 14:08:25.454233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.235 [2024-07-26 14:08:25.454397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.235 [2024-07-26 14:08:25.454414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.235 [2024-07-26 14:08:25.454422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.236 [2024-07-26 14:08:25.454428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.236 [2024-07-26 14:08:25.454445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.236 qpair failed and we were unable to recover it. 
00:26:58.236 [2024-07-26 14:08:25.464184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.236 [2024-07-26 14:08:25.464338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.236 [2024-07-26 14:08:25.464356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.236 [2024-07-26 14:08:25.464363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.236 [2024-07-26 14:08:25.464376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.236 [2024-07-26 14:08:25.464393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.236 qpair failed and we were unable to recover it. 00:26:58.236 [2024-07-26 14:08:25.474239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.236 [2024-07-26 14:08:25.474390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.236 [2024-07-26 14:08:25.474409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.236 [2024-07-26 14:08:25.474417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.236 [2024-07-26 14:08:25.474423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.236 [2024-07-26 14:08:25.474440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.236 qpair failed and we were unable to recover it. 00:26:58.236 [2024-07-26 14:08:25.484263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.236 [2024-07-26 14:08:25.484414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.236 [2024-07-26 14:08:25.484432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.236 [2024-07-26 14:08:25.484440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.236 [2024-07-26 14:08:25.484446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.236 [2024-07-26 14:08:25.484464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.236 qpair failed and we were unable to recover it. 
00:26:58.236 [2024-07-26 14:08:25.494292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.236 [2024-07-26 14:08:25.494460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.236 [2024-07-26 14:08:25.494478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.236 [2024-07-26 14:08:25.494486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.236 [2024-07-26 14:08:25.494492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.236 [2024-07-26 14:08:25.494509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.236 qpair failed and we were unable to recover it. 00:26:58.236 [2024-07-26 14:08:25.504307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.236 [2024-07-26 14:08:25.504462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.236 [2024-07-26 14:08:25.504480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.236 [2024-07-26 14:08:25.504487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.236 [2024-07-26 14:08:25.504493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.236 [2024-07-26 14:08:25.504511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.236 qpair failed and we were unable to recover it. 00:26:58.236 [2024-07-26 14:08:25.514266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.236 [2024-07-26 14:08:25.514425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.236 [2024-07-26 14:08:25.514444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.236 [2024-07-26 14:08:25.514451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.236 [2024-07-26 14:08:25.514457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.236 [2024-07-26 14:08:25.514474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.236 qpair failed and we were unable to recover it. 
00:26:58.236 [2024-07-26 14:08:25.524374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.236 [2024-07-26 14:08:25.524522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.236 [2024-07-26 14:08:25.524539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.236 [2024-07-26 14:08:25.524546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.236 [2024-07-26 14:08:25.524552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.236 [2024-07-26 14:08:25.524569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.236 qpair failed and we were unable to recover it. 00:26:58.236 [2024-07-26 14:08:25.534401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.236 [2024-07-26 14:08:25.534553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.236 [2024-07-26 14:08:25.534571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.236 [2024-07-26 14:08:25.534578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.236 [2024-07-26 14:08:25.534584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.236 [2024-07-26 14:08:25.534601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.236 qpair failed and we were unable to recover it. 00:26:58.236 [2024-07-26 14:08:25.544406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.236 [2024-07-26 14:08:25.544560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.236 [2024-07-26 14:08:25.544578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.236 [2024-07-26 14:08:25.544585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.236 [2024-07-26 14:08:25.544591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.236 [2024-07-26 14:08:25.544609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.236 qpair failed and we were unable to recover it. 
00:26:58.236 [2024-07-26 14:08:25.554455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.236 [2024-07-26 14:08:25.554607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.236 [2024-07-26 14:08:25.554624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.236 [2024-07-26 14:08:25.554632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.236 [2024-07-26 14:08:25.554641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.236 [2024-07-26 14:08:25.554659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.236 qpair failed and we were unable to recover it. 00:26:58.236 [2024-07-26 14:08:25.564489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.236 [2024-07-26 14:08:25.564638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.236 [2024-07-26 14:08:25.564656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.236 [2024-07-26 14:08:25.564664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.236 [2024-07-26 14:08:25.564670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.236 [2024-07-26 14:08:25.564687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.236 qpair failed and we were unable to recover it. 00:26:58.236 [2024-07-26 14:08:25.574523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.236 [2024-07-26 14:08:25.574671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.236 [2024-07-26 14:08:25.574689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.236 [2024-07-26 14:08:25.574697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.236 [2024-07-26 14:08:25.574702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.236 [2024-07-26 14:08:25.574719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.236 qpair failed and we were unable to recover it. 
00:26:58.236 [2024-07-26 14:08:25.584532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.236 [2024-07-26 14:08:25.584683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.236 [2024-07-26 14:08:25.584701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.236 [2024-07-26 14:08:25.584708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.237 [2024-07-26 14:08:25.584714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.237 [2024-07-26 14:08:25.584732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.237 qpair failed and we were unable to recover it. 00:26:58.237 [2024-07-26 14:08:25.594591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.237 [2024-07-26 14:08:25.594757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.237 [2024-07-26 14:08:25.594775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.237 [2024-07-26 14:08:25.594782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.237 [2024-07-26 14:08:25.594788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.237 [2024-07-26 14:08:25.594806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.237 qpair failed and we were unable to recover it. 00:26:58.237 [2024-07-26 14:08:25.604592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.237 [2024-07-26 14:08:25.604742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.237 [2024-07-26 14:08:25.604760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.237 [2024-07-26 14:08:25.604767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.237 [2024-07-26 14:08:25.604774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.237 [2024-07-26 14:08:25.604791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.237 qpair failed and we were unable to recover it. 
00:26:58.237 [2024-07-26 14:08:25.614681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.237 [2024-07-26 14:08:25.614869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.237 [2024-07-26 14:08:25.614887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.237 [2024-07-26 14:08:25.614894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.237 [2024-07-26 14:08:25.614900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.237 [2024-07-26 14:08:25.614917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.237 qpair failed and we were unable to recover it. 00:26:58.237 [2024-07-26 14:08:25.624648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.237 [2024-07-26 14:08:25.624799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.237 [2024-07-26 14:08:25.624816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.237 [2024-07-26 14:08:25.624823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.237 [2024-07-26 14:08:25.624830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.237 [2024-07-26 14:08:25.624847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.237 qpair failed and we were unable to recover it. 00:26:58.237 [2024-07-26 14:08:25.634697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.237 [2024-07-26 14:08:25.634846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.237 [2024-07-26 14:08:25.634864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.237 [2024-07-26 14:08:25.634871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.237 [2024-07-26 14:08:25.634877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.237 [2024-07-26 14:08:25.634894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.237 qpair failed and we were unable to recover it. 
00:26:58.237 [2024-07-26 14:08:25.644726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.237 [2024-07-26 14:08:25.644877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.237 [2024-07-26 14:08:25.644894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.237 [2024-07-26 14:08:25.644905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.237 [2024-07-26 14:08:25.644911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.237 [2024-07-26 14:08:25.644929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.237 qpair failed and we were unable to recover it. 00:26:58.237 [2024-07-26 14:08:25.654744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.237 [2024-07-26 14:08:25.654894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.237 [2024-07-26 14:08:25.654912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.237 [2024-07-26 14:08:25.654919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.237 [2024-07-26 14:08:25.654925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.237 [2024-07-26 14:08:25.654942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.237 qpair failed and we were unable to recover it. 00:26:58.237 [2024-07-26 14:08:25.664766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.237 [2024-07-26 14:08:25.664918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.237 [2024-07-26 14:08:25.664936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.237 [2024-07-26 14:08:25.664943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.237 [2024-07-26 14:08:25.664949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.237 [2024-07-26 14:08:25.664966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.237 qpair failed and we were unable to recover it. 
00:26:58.498 [2024-07-26 14:08:25.674857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.675008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.675026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.675034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.675040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.675064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 00:26:58.498 [2024-07-26 14:08:25.684832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.684978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.684996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.685003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.685009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.685026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 00:26:58.498 [2024-07-26 14:08:25.694862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.695009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.695027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.695034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.695040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.695065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 
00:26:58.498 [2024-07-26 14:08:25.704888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.705050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.705068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.705075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.705081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.705099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 00:26:58.498 [2024-07-26 14:08:25.714929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.715107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.715125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.715133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.715139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.715156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 00:26:58.498 [2024-07-26 14:08:25.724955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.725111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.725130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.725137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.725144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.725161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 
00:26:58.498 [2024-07-26 14:08:25.734980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.735140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.735162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.735169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.735175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.735192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 00:26:58.498 [2024-07-26 14:08:25.744992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.745152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.745171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.745178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.745186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.745203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 00:26:58.498 [2024-07-26 14:08:25.755040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.755198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.755215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.755223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.755229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.755247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 
00:26:58.498 [2024-07-26 14:08:25.765064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.765215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.765233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.765240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.765246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.765264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 00:26:58.498 [2024-07-26 14:08:25.775114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.775266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.775284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.775292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.775299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.775320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 00:26:58.498 [2024-07-26 14:08:25.785088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.785278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.785294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.785303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.785309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.785326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 
00:26:58.498 [2024-07-26 14:08:25.795259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.795416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.795433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.795441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.795448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.795465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 00:26:58.498 [2024-07-26 14:08:25.805180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.805330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.805348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.805355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.805361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.805379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 00:26:58.498 [2024-07-26 14:08:25.815245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.815396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.815414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.815421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.815428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.815445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 
00:26:58.498 [2024-07-26 14:08:25.825166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.825329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.825350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.825358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.825364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.825381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 00:26:58.498 [2024-07-26 14:08:25.835260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.835433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.835450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.835457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.835464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.835481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 00:26:58.498 [2024-07-26 14:08:25.845301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.845495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.845513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.845520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.845526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.845543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 
00:26:58.498 [2024-07-26 14:08:25.855326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.855522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.855539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.855546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.855552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.855569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 00:26:58.498 [2024-07-26 14:08:25.865326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.865518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.865536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.865543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.865552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.865570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 00:26:58.498 [2024-07-26 14:08:25.875387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.875542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.875560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.875567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.875573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.875590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 
00:26:58.498 [2024-07-26 14:08:25.885614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.885810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.885828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.885835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.885841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.885858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 00:26:58.498 [2024-07-26 14:08:25.895412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.895561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.895578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.498 [2024-07-26 14:08:25.895585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.498 [2024-07-26 14:08:25.895591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.498 [2024-07-26 14:08:25.895608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.498 qpair failed and we were unable to recover it. 00:26:58.498 [2024-07-26 14:08:25.905438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.498 [2024-07-26 14:08:25.905589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.498 [2024-07-26 14:08:25.905606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.499 [2024-07-26 14:08:25.905614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.499 [2024-07-26 14:08:25.905620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.499 [2024-07-26 14:08:25.905637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.499 qpair failed and we were unable to recover it. 
00:26:58.499 [2024-07-26 14:08:25.915493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.499 [2024-07-26 14:08:25.915648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.499 [2024-07-26 14:08:25.915665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.499 [2024-07-26 14:08:25.915672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.499 [2024-07-26 14:08:25.915678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.499 [2024-07-26 14:08:25.915696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.499 qpair failed and we were unable to recover it. 00:26:58.499 [2024-07-26 14:08:25.925445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.499 [2024-07-26 14:08:25.925646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.499 [2024-07-26 14:08:25.925664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.499 [2024-07-26 14:08:25.925671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.499 [2024-07-26 14:08:25.925678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.499 [2024-07-26 14:08:25.925696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.499 qpair failed and we were unable to recover it. 00:26:58.758 [2024-07-26 14:08:25.935537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.758 [2024-07-26 14:08:25.935689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.758 [2024-07-26 14:08:25.935707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.758 [2024-07-26 14:08:25.935715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.758 [2024-07-26 14:08:25.935722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.758 [2024-07-26 14:08:25.935739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.758 qpair failed and we were unable to recover it. 
00:26:58.758 [2024-07-26 14:08:25.945559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.758 [2024-07-26 14:08:25.945709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.758 [2024-07-26 14:08:25.945728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.758 [2024-07-26 14:08:25.945735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.758 [2024-07-26 14:08:25.945741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.758 [2024-07-26 14:08:25.945757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.758 qpair failed and we were unable to recover it. 00:26:58.758 [2024-07-26 14:08:25.955645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.758 [2024-07-26 14:08:25.955798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.758 [2024-07-26 14:08:25.955816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.758 [2024-07-26 14:08:25.955823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.758 [2024-07-26 14:08:25.955833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.758 [2024-07-26 14:08:25.955851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.758 qpair failed and we were unable to recover it. 00:26:58.758 [2024-07-26 14:08:25.965623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.758 [2024-07-26 14:08:25.965773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.758 [2024-07-26 14:08:25.965790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.758 [2024-07-26 14:08:25.965798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.758 [2024-07-26 14:08:25.965804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.758 [2024-07-26 14:08:25.965821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.758 qpair failed and we were unable to recover it. 
00:26:58.758 [2024-07-26 14:08:25.975585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.758 [2024-07-26 14:08:25.975738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.758 [2024-07-26 14:08:25.975755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.758 [2024-07-26 14:08:25.975763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.758 [2024-07-26 14:08:25.975769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.758 [2024-07-26 14:08:25.975786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.758 qpair failed and we were unable to recover it. 00:26:58.758 [2024-07-26 14:08:25.985717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.758 [2024-07-26 14:08:25.985880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.758 [2024-07-26 14:08:25.985898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.758 [2024-07-26 14:08:25.985905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.758 [2024-07-26 14:08:25.985912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.758 [2024-07-26 14:08:25.985929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.758 qpair failed and we were unable to recover it. 00:26:58.758 [2024-07-26 14:08:25.995739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.758 [2024-07-26 14:08:25.995927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.759 [2024-07-26 14:08:25.995944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.759 [2024-07-26 14:08:25.995952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.759 [2024-07-26 14:08:25.995958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.759 [2024-07-26 14:08:25.995975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.759 qpair failed and we were unable to recover it. 
00:26:58.759 [2024-07-26 14:08:26.005747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.759 [2024-07-26 14:08:26.005895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.759 [2024-07-26 14:08:26.005914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.759 [2024-07-26 14:08:26.005921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.759 [2024-07-26 14:08:26.005928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.759 [2024-07-26 14:08:26.005945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.759 qpair failed and we were unable to recover it. 00:26:58.759 [2024-07-26 14:08:26.015815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.759 [2024-07-26 14:08:26.015965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.759 [2024-07-26 14:08:26.015983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.759 [2024-07-26 14:08:26.015991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.759 [2024-07-26 14:08:26.015997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.759 [2024-07-26 14:08:26.016014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.759 qpair failed and we were unable to recover it. 00:26:58.759 [2024-07-26 14:08:26.025794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.759 [2024-07-26 14:08:26.025949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.759 [2024-07-26 14:08:26.025966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.759 [2024-07-26 14:08:26.025974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.759 [2024-07-26 14:08:26.025980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.759 [2024-07-26 14:08:26.025997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.759 qpair failed and we were unable to recover it. 
00:26:58.759 [2024-07-26 14:08:26.035828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.759 [2024-07-26 14:08:26.036012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.759 [2024-07-26 14:08:26.036029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.759 [2024-07-26 14:08:26.036037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.759 [2024-07-26 14:08:26.036049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.759 [2024-07-26 14:08:26.036066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.759 qpair failed and we were unable to recover it. 00:26:58.759 [2024-07-26 14:08:26.045789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.759 [2024-07-26 14:08:26.045939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.759 [2024-07-26 14:08:26.045956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.759 [2024-07-26 14:08:26.045967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.759 [2024-07-26 14:08:26.045973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.759 [2024-07-26 14:08:26.045992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.759 qpair failed and we were unable to recover it. 00:26:58.759 [2024-07-26 14:08:26.055877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.759 [2024-07-26 14:08:26.056028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.759 [2024-07-26 14:08:26.056051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.759 [2024-07-26 14:08:26.056060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.759 [2024-07-26 14:08:26.056066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.759 [2024-07-26 14:08:26.056083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.759 qpair failed and we were unable to recover it. 
00:26:58.759 [2024-07-26 14:08:26.065910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.759 [2024-07-26 14:08:26.066074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.759 [2024-07-26 14:08:26.066091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.759 [2024-07-26 14:08:26.066099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.759 [2024-07-26 14:08:26.066105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.759 [2024-07-26 14:08:26.066122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.759 qpair failed and we were unable to recover it. 00:26:58.759 [2024-07-26 14:08:26.075957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.759 [2024-07-26 14:08:26.076137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.759 [2024-07-26 14:08:26.076154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.759 [2024-07-26 14:08:26.076163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.759 [2024-07-26 14:08:26.076169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.759 [2024-07-26 14:08:26.076186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.759 qpair failed and we were unable to recover it. 00:26:58.759 [2024-07-26 14:08:26.085980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.759 [2024-07-26 14:08:26.086132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.759 [2024-07-26 14:08:26.086151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.759 [2024-07-26 14:08:26.086159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.759 [2024-07-26 14:08:26.086165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:26:58.759 [2024-07-26 14:08:26.086183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.759 qpair failed and we were unable to recover it. 
00:26:58.759 [2024-07-26 14:08:26.096029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.759 [2024-07-26 14:08:26.096241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.759 [2024-07-26 14:08:26.096271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.759 [2024-07-26 14:08:26.096283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.759 [2024-07-26 14:08:26.096293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:58.759 [2024-07-26 14:08:26.096318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.759 qpair failed and we were unable to recover it. 00:26:58.759 [2024-07-26 14:08:26.105977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.759 [2024-07-26 14:08:26.106138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.759 [2024-07-26 14:08:26.106157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.759 [2024-07-26 14:08:26.106165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.759 [2024-07-26 14:08:26.106171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:58.759 [2024-07-26 14:08:26.106189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.759 qpair failed and we were unable to recover it. 00:26:58.759 [2024-07-26 14:08:26.116091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.759 [2024-07-26 14:08:26.116244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.759 [2024-07-26 14:08:26.116263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.759 [2024-07-26 14:08:26.116271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.759 [2024-07-26 14:08:26.116277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:58.759 [2024-07-26 14:08:26.116295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.759 qpair failed and we were unable to recover it. 
00:26:58.759 [2024-07-26 14:08:26.126137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.759 [2024-07-26 14:08:26.126287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.759 [2024-07-26 14:08:26.126306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.759 [2024-07-26 14:08:26.126314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.759 [2024-07-26 14:08:26.126320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:58.759 [2024-07-26 14:08:26.126337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.759 qpair failed and we were unable to recover it. 00:26:58.759 [2024-07-26 14:08:26.136142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.759 [2024-07-26 14:08:26.136294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.759 [2024-07-26 14:08:26.136313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.759 [2024-07-26 14:08:26.136323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.759 [2024-07-26 14:08:26.136329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:58.759 [2024-07-26 14:08:26.136347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.759 qpair failed and we were unable to recover it. 00:26:58.759 [2024-07-26 14:08:26.146103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.759 [2024-07-26 14:08:26.146291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.759 [2024-07-26 14:08:26.146310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.759 [2024-07-26 14:08:26.146317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.759 [2024-07-26 14:08:26.146323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:58.759 [2024-07-26 14:08:26.146341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.759 qpair failed and we were unable to recover it. 
00:26:58.759 [2024-07-26 14:08:26.156198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.759 [2024-07-26 14:08:26.156348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.759 [2024-07-26 14:08:26.156366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.759 [2024-07-26 14:08:26.156373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.759 [2024-07-26 14:08:26.156380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:58.759 [2024-07-26 14:08:26.156398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.759 qpair failed and we were unable to recover it. 00:26:58.759 [2024-07-26 14:08:26.166226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.759 [2024-07-26 14:08:26.166376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.759 [2024-07-26 14:08:26.166395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.759 [2024-07-26 14:08:26.166403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.760 [2024-07-26 14:08:26.166409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:58.760 [2024-07-26 14:08:26.166426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.760 qpair failed and we were unable to recover it. 00:26:58.760 [2024-07-26 14:08:26.176269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.760 [2024-07-26 14:08:26.176424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.760 [2024-07-26 14:08:26.176443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.760 [2024-07-26 14:08:26.176450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.760 [2024-07-26 14:08:26.176456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:58.760 [2024-07-26 14:08:26.176474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.760 qpair failed and we were unable to recover it. 
00:26:58.760 [2024-07-26 14:08:26.186283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.760 [2024-07-26 14:08:26.186441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.760 [2024-07-26 14:08:26.186460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.760 [2024-07-26 14:08:26.186467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.760 [2024-07-26 14:08:26.186473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:58.760 [2024-07-26 14:08:26.186490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.760 qpair failed and we were unable to recover it. 00:26:59.020 [2024-07-26 14:08:26.196324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.020 [2024-07-26 14:08:26.196543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.020 [2024-07-26 14:08:26.196565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.020 [2024-07-26 14:08:26.196574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.020 [2024-07-26 14:08:26.196580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.020 [2024-07-26 14:08:26.196598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.020 qpair failed and we were unable to recover it. 00:26:59.020 [2024-07-26 14:08:26.206309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.020 [2024-07-26 14:08:26.206504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.020 [2024-07-26 14:08:26.206524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.020 [2024-07-26 14:08:26.206531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.020 [2024-07-26 14:08:26.206537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.020 [2024-07-26 14:08:26.206555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.020 qpair failed and we were unable to recover it. 
00:26:59.020 [2024-07-26 14:08:26.216362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.020 [2024-07-26 14:08:26.216512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.020 [2024-07-26 14:08:26.216531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.020 [2024-07-26 14:08:26.216538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.020 [2024-07-26 14:08:26.216544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.020 [2024-07-26 14:08:26.216562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.020 qpair failed and we were unable to recover it. 00:26:59.020 [2024-07-26 14:08:26.226405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.020 [2024-07-26 14:08:26.226573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.020 [2024-07-26 14:08:26.226595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.020 [2024-07-26 14:08:26.226603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.021 [2024-07-26 14:08:26.226609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.021 [2024-07-26 14:08:26.226626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.021 qpair failed and we were unable to recover it. 00:26:59.021 [2024-07-26 14:08:26.236421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.021 [2024-07-26 14:08:26.236576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.021 [2024-07-26 14:08:26.236594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.021 [2024-07-26 14:08:26.236602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.021 [2024-07-26 14:08:26.236608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.021 [2024-07-26 14:08:26.236625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.021 qpair failed and we were unable to recover it. 
00:26:59.021 [2024-07-26 14:08:26.246443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.021 [2024-07-26 14:08:26.246595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.021 [2024-07-26 14:08:26.246613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.021 [2024-07-26 14:08:26.246621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.021 [2024-07-26 14:08:26.246627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.021 [2024-07-26 14:08:26.246645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.021 qpair failed and we were unable to recover it. 00:26:59.021 [2024-07-26 14:08:26.256477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.021 [2024-07-26 14:08:26.256626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.021 [2024-07-26 14:08:26.256644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.021 [2024-07-26 14:08:26.256652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.021 [2024-07-26 14:08:26.256658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.021 [2024-07-26 14:08:26.256675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.021 qpair failed and we were unable to recover it. 00:26:59.021 [2024-07-26 14:08:26.266501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.021 [2024-07-26 14:08:26.266655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.021 [2024-07-26 14:08:26.266674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.021 [2024-07-26 14:08:26.266681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.021 [2024-07-26 14:08:26.266687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.021 [2024-07-26 14:08:26.266709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.021 qpair failed and we were unable to recover it. 
00:26:59.021 [2024-07-26 14:08:26.276536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.021 [2024-07-26 14:08:26.276691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.021 [2024-07-26 14:08:26.276709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.021 [2024-07-26 14:08:26.276717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.021 [2024-07-26 14:08:26.276722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.021 [2024-07-26 14:08:26.276739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.021 qpair failed and we were unable to recover it. 00:26:59.021 [2024-07-26 14:08:26.286541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.021 [2024-07-26 14:08:26.286695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.021 [2024-07-26 14:08:26.286714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.021 [2024-07-26 14:08:26.286722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.021 [2024-07-26 14:08:26.286729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.021 [2024-07-26 14:08:26.286745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.021 qpair failed and we were unable to recover it. 00:26:59.021 [2024-07-26 14:08:26.296572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.021 [2024-07-26 14:08:26.296766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.021 [2024-07-26 14:08:26.296785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.021 [2024-07-26 14:08:26.296792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.021 [2024-07-26 14:08:26.296798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.021 [2024-07-26 14:08:26.296817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.021 qpair failed and we were unable to recover it. 
00:26:59.021 [2024-07-26 14:08:26.306605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.021 [2024-07-26 14:08:26.306758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.021 [2024-07-26 14:08:26.306776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.021 [2024-07-26 14:08:26.306784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.021 [2024-07-26 14:08:26.306790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.021 [2024-07-26 14:08:26.306807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.021 qpair failed and we were unable to recover it. 00:26:59.021 [2024-07-26 14:08:26.316631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.021 [2024-07-26 14:08:26.316784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.021 [2024-07-26 14:08:26.316806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.021 [2024-07-26 14:08:26.316814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.021 [2024-07-26 14:08:26.316820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.021 [2024-07-26 14:08:26.316838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.021 qpair failed and we were unable to recover it. 00:26:59.021 [2024-07-26 14:08:26.326598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.021 [2024-07-26 14:08:26.326979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.021 [2024-07-26 14:08:26.326998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.021 [2024-07-26 14:08:26.327004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.021 [2024-07-26 14:08:26.327011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.021 [2024-07-26 14:08:26.327027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.021 qpair failed and we were unable to recover it. 
00:26:59.021 [2024-07-26 14:08:26.336652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.021 [2024-07-26 14:08:26.336806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.021 [2024-07-26 14:08:26.336824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.021 [2024-07-26 14:08:26.336832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.021 [2024-07-26 14:08:26.336838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.021 [2024-07-26 14:08:26.336855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.021 qpair failed and we were unable to recover it. 00:26:59.021 [2024-07-26 14:08:26.346686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.021 [2024-07-26 14:08:26.346839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.021 [2024-07-26 14:08:26.346858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.021 [2024-07-26 14:08:26.346865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.021 [2024-07-26 14:08:26.346871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.021 [2024-07-26 14:08:26.346888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.021 qpair failed and we were unable to recover it. 00:26:59.021 [2024-07-26 14:08:26.356913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.021 [2024-07-26 14:08:26.357082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.021 [2024-07-26 14:08:26.357103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.021 [2024-07-26 14:08:26.357111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.021 [2024-07-26 14:08:26.357117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.021 [2024-07-26 14:08:26.357139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.022 qpair failed and we were unable to recover it. 
00:26:59.022 [2024-07-26 14:08:26.366716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.022 [2024-07-26 14:08:26.366870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.022 [2024-07-26 14:08:26.366889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.022 [2024-07-26 14:08:26.366896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.022 [2024-07-26 14:08:26.366903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.022 [2024-07-26 14:08:26.366920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.022 qpair failed and we were unable to recover it. 00:26:59.022 [2024-07-26 14:08:26.376831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.022 [2024-07-26 14:08:26.376985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.022 [2024-07-26 14:08:26.377003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.022 [2024-07-26 14:08:26.377010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.022 [2024-07-26 14:08:26.377017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.022 [2024-07-26 14:08:26.377034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.022 qpair failed and we were unable to recover it. 00:26:59.022 [2024-07-26 14:08:26.386853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.022 [2024-07-26 14:08:26.387026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.022 [2024-07-26 14:08:26.387052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.022 [2024-07-26 14:08:26.387060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.022 [2024-07-26 14:08:26.387066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.022 [2024-07-26 14:08:26.387084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.022 qpair failed and we were unable to recover it. 
00:26:59.022 [2024-07-26 14:08:26.396877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.022 [2024-07-26 14:08:26.397027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.022 [2024-07-26 14:08:26.397050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.022 [2024-07-26 14:08:26.397058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.022 [2024-07-26 14:08:26.397064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.022 [2024-07-26 14:08:26.397082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.022 qpair failed and we were unable to recover it. 00:26:59.022 [2024-07-26 14:08:26.406910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.022 [2024-07-26 14:08:26.407068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.022 [2024-07-26 14:08:26.407092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.022 [2024-07-26 14:08:26.407100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.022 [2024-07-26 14:08:26.407106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.022 [2024-07-26 14:08:26.407124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.022 qpair failed and we were unable to recover it. 00:26:59.022 [2024-07-26 14:08:26.416928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.022 [2024-07-26 14:08:26.417092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.022 [2024-07-26 14:08:26.417111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.022 [2024-07-26 14:08:26.417119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.022 [2024-07-26 14:08:26.417124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.022 [2024-07-26 14:08:26.417142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.022 qpair failed and we were unable to recover it. 
00:26:59.022 [2024-07-26 14:08:26.426957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.022 [2024-07-26 14:08:26.427115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.022 [2024-07-26 14:08:26.427134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.022 [2024-07-26 14:08:26.427142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.022 [2024-07-26 14:08:26.427148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.022 [2024-07-26 14:08:26.427165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.022 qpair failed and we were unable to recover it. 00:26:59.022 [2024-07-26 14:08:26.436927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.022 [2024-07-26 14:08:26.437091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.022 [2024-07-26 14:08:26.437110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.022 [2024-07-26 14:08:26.437117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.022 [2024-07-26 14:08:26.437124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.022 [2024-07-26 14:08:26.437142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.022 qpair failed and we were unable to recover it. 00:26:59.022 [2024-07-26 14:08:26.447180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.022 [2024-07-26 14:08:26.447335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.022 [2024-07-26 14:08:26.447354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.022 [2024-07-26 14:08:26.447362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.022 [2024-07-26 14:08:26.447369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.022 [2024-07-26 14:08:26.447390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.022 qpair failed and we were unable to recover it. 
00:26:59.286 [2024-07-26 14:08:26.457036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.286 [2024-07-26 14:08:26.457193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.286 [2024-07-26 14:08:26.457214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.286 [2024-07-26 14:08:26.457222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.286 [2024-07-26 14:08:26.457230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.286 [2024-07-26 14:08:26.457249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.286 qpair failed and we were unable to recover it. 00:26:59.286 [2024-07-26 14:08:26.467014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.286 [2024-07-26 14:08:26.467179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.286 [2024-07-26 14:08:26.467199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.286 [2024-07-26 14:08:26.467207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.286 [2024-07-26 14:08:26.467213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.286 [2024-07-26 14:08:26.467229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.286 qpair failed and we were unable to recover it. 00:26:59.286 [2024-07-26 14:08:26.477109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.286 [2024-07-26 14:08:26.477262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.286 [2024-07-26 14:08:26.477281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.286 [2024-07-26 14:08:26.477288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.286 [2024-07-26 14:08:26.477295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.286 [2024-07-26 14:08:26.477313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.286 qpair failed and we were unable to recover it. 
00:26:59.286 [2024-07-26 14:08:26.487116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.286 [2024-07-26 14:08:26.487271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.286 [2024-07-26 14:08:26.487290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.286 [2024-07-26 14:08:26.487298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.286 [2024-07-26 14:08:26.487304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.286 [2024-07-26 14:08:26.487321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.286 qpair failed and we were unable to recover it. 00:26:59.286 [2024-07-26 14:08:26.497195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.286 [2024-07-26 14:08:26.497350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.286 [2024-07-26 14:08:26.497372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.286 [2024-07-26 14:08:26.497380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.286 [2024-07-26 14:08:26.497386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.286 [2024-07-26 14:08:26.497403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.286 qpair failed and we were unable to recover it. 00:26:59.286 [2024-07-26 14:08:26.507226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.286 [2024-07-26 14:08:26.507382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.286 [2024-07-26 14:08:26.507401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.286 [2024-07-26 14:08:26.507408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.286 [2024-07-26 14:08:26.507415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.286 [2024-07-26 14:08:26.507432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.286 qpair failed and we were unable to recover it. 
00:26:59.286 [2024-07-26 14:08:26.517160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.286 [2024-07-26 14:08:26.517315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.286 [2024-07-26 14:08:26.517334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.286 [2024-07-26 14:08:26.517341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.286 [2024-07-26 14:08:26.517347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.286 [2024-07-26 14:08:26.517364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.286 qpair failed and we were unable to recover it. 00:26:59.286 [2024-07-26 14:08:26.527275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.286 [2024-07-26 14:08:26.527470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.286 [2024-07-26 14:08:26.527489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.286 [2024-07-26 14:08:26.527496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.286 [2024-07-26 14:08:26.527502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.286 [2024-07-26 14:08:26.527518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.286 qpair failed and we were unable to recover it. 00:26:59.286 [2024-07-26 14:08:26.537273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.286 [2024-07-26 14:08:26.537426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.286 [2024-07-26 14:08:26.537445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.286 [2024-07-26 14:08:26.537451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.286 [2024-07-26 14:08:26.537461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.286 [2024-07-26 14:08:26.537479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.286 qpair failed and we were unable to recover it. 
00:26:59.286 [2024-07-26 14:08:26.547321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.286 [2024-07-26 14:08:26.547475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.286 [2024-07-26 14:08:26.547493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.286 [2024-07-26 14:08:26.547501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.286 [2024-07-26 14:08:26.547507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.286 [2024-07-26 14:08:26.547524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.286 qpair failed and we were unable to recover it. 00:26:59.286 [2024-07-26 14:08:26.557364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.287 [2024-07-26 14:08:26.557517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.287 [2024-07-26 14:08:26.557535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.287 [2024-07-26 14:08:26.557543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.287 [2024-07-26 14:08:26.557549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.287 [2024-07-26 14:08:26.557566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.287 qpair failed and we were unable to recover it. 00:26:59.287 [2024-07-26 14:08:26.567360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.287 [2024-07-26 14:08:26.567512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.287 [2024-07-26 14:08:26.567530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.287 [2024-07-26 14:08:26.567537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.287 [2024-07-26 14:08:26.567543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.287 [2024-07-26 14:08:26.567561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.287 qpair failed and we were unable to recover it. 
00:26:59.287 [2024-07-26 14:08:26.577328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.287 [2024-07-26 14:08:26.577495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.287 [2024-07-26 14:08:26.577513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.287 [2024-07-26 14:08:26.577521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.287 [2024-07-26 14:08:26.577527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.287 [2024-07-26 14:08:26.577544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.287 qpair failed and we were unable to recover it. 00:26:59.287 [2024-07-26 14:08:26.587443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.287 [2024-07-26 14:08:26.587605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.287 [2024-07-26 14:08:26.587624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.287 [2024-07-26 14:08:26.587631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.287 [2024-07-26 14:08:26.587637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.287 [2024-07-26 14:08:26.587655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.287 qpair failed and we were unable to recover it. 00:26:59.287 [2024-07-26 14:08:26.597384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.287 [2024-07-26 14:08:26.597540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.287 [2024-07-26 14:08:26.597560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.287 [2024-07-26 14:08:26.597567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.287 [2024-07-26 14:08:26.597573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.287 [2024-07-26 14:08:26.597590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.287 qpair failed and we were unable to recover it. 
00:26:59.287 [2024-07-26 14:08:26.607432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.287 [2024-07-26 14:08:26.607582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.287 [2024-07-26 14:08:26.607601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.287 [2024-07-26 14:08:26.607609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.287 [2024-07-26 14:08:26.607615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.287 [2024-07-26 14:08:26.607633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.287 qpair failed and we were unable to recover it. 00:26:59.287 [2024-07-26 14:08:26.617442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.287 [2024-07-26 14:08:26.617596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.287 [2024-07-26 14:08:26.617614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.287 [2024-07-26 14:08:26.617621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.287 [2024-07-26 14:08:26.617628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.287 [2024-07-26 14:08:26.617646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.287 qpair failed and we were unable to recover it. 00:26:59.287 [2024-07-26 14:08:26.627464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.287 [2024-07-26 14:08:26.627618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.287 [2024-07-26 14:08:26.627637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.287 [2024-07-26 14:08:26.627644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.287 [2024-07-26 14:08:26.627658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.287 [2024-07-26 14:08:26.627675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.287 qpair failed and we were unable to recover it. 
00:26:59.287 [2024-07-26 14:08:26.637559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.287 [2024-07-26 14:08:26.637715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.287 [2024-07-26 14:08:26.637734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.287 [2024-07-26 14:08:26.637741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.287 [2024-07-26 14:08:26.637747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.287 [2024-07-26 14:08:26.637765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.287 qpair failed and we were unable to recover it. 00:26:59.287 [2024-07-26 14:08:26.647527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.287 [2024-07-26 14:08:26.647676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.287 [2024-07-26 14:08:26.647695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.287 [2024-07-26 14:08:26.647702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.287 [2024-07-26 14:08:26.647708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.287 [2024-07-26 14:08:26.647725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.287 qpair failed and we were unable to recover it. 00:26:59.287 [2024-07-26 14:08:26.657646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.287 [2024-07-26 14:08:26.657818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.287 [2024-07-26 14:08:26.657836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.287 [2024-07-26 14:08:26.657844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.287 [2024-07-26 14:08:26.657850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.287 [2024-07-26 14:08:26.657867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.287 qpair failed and we were unable to recover it. 
00:26:59.287 [2024-07-26 14:08:26.667820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.287 [2024-07-26 14:08:26.667976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.287 [2024-07-26 14:08:26.667994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.287 [2024-07-26 14:08:26.668002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.287 [2024-07-26 14:08:26.668008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.287 [2024-07-26 14:08:26.668025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.287 qpair failed and we were unable to recover it. 00:26:59.288 [2024-07-26 14:08:26.677677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.288 [2024-07-26 14:08:26.677839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.288 [2024-07-26 14:08:26.677858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.288 [2024-07-26 14:08:26.677865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.288 [2024-07-26 14:08:26.677871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.288 [2024-07-26 14:08:26.677888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-07-26 14:08:26.687689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.288 [2024-07-26 14:08:26.687842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.288 [2024-07-26 14:08:26.687861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.288 [2024-07-26 14:08:26.687868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.288 [2024-07-26 14:08:26.687875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.288 [2024-07-26 14:08:26.687892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.288 qpair failed and we were unable to recover it. 
00:26:59.288 [2024-07-26 14:08:26.697671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.288 [2024-07-26 14:08:26.697855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.288 [2024-07-26 14:08:26.697874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.288 [2024-07-26 14:08:26.697881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.288 [2024-07-26 14:08:26.697888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.288 [2024-07-26 14:08:26.697905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-07-26 14:08:26.707752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.288 [2024-07-26 14:08:26.707942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.288 [2024-07-26 14:08:26.707961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.288 [2024-07-26 14:08:26.707968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.288 [2024-07-26 14:08:26.707974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.288 [2024-07-26 14:08:26.707991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.288 qpair failed and we were unable to recover it. 00:26:59.288 [2024-07-26 14:08:26.717719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.288 [2024-07-26 14:08:26.717879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.288 [2024-07-26 14:08:26.717897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.288 [2024-07-26 14:08:26.717908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.288 [2024-07-26 14:08:26.717914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.288 [2024-07-26 14:08:26.717933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.288 qpair failed and we were unable to recover it. 
00:26:59.578 [2024-07-26 14:08:26.727808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.578 [2024-07-26 14:08:26.728011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.578 [2024-07-26 14:08:26.728031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.578 [2024-07-26 14:08:26.728038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.578 [2024-07-26 14:08:26.728051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.578 [2024-07-26 14:08:26.728069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.578 qpair failed and we were unable to recover it. 00:26:59.578 [2024-07-26 14:08:26.737834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.578 [2024-07-26 14:08:26.737984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.578 [2024-07-26 14:08:26.738005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.578 [2024-07-26 14:08:26.738012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.578 [2024-07-26 14:08:26.738019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.578 [2024-07-26 14:08:26.738036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.578 qpair failed and we were unable to recover it. 00:26:59.578 [2024-07-26 14:08:26.747813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.578 [2024-07-26 14:08:26.747972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.578 [2024-07-26 14:08:26.747990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.578 [2024-07-26 14:08:26.747997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.578 [2024-07-26 14:08:26.748004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.578 [2024-07-26 14:08:26.748021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.578 qpair failed and we were unable to recover it. 
00:26:59.578 [2024-07-26 14:08:26.757916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.578 [2024-07-26 14:08:26.758077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.578 [2024-07-26 14:08:26.758095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-26 14:08:26.758102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-26 14:08:26.758108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.579 [2024-07-26 14:08:26.758126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.579 qpair failed and we were unable to recover it. 00:26:59.579 [2024-07-26 14:08:26.767861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-26 14:08:26.768012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-26 14:08:26.768031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-26 14:08:26.768038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-26 14:08:26.768050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.579 [2024-07-26 14:08:26.768068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.579 qpair failed and we were unable to recover it. 00:26:59.579 [2024-07-26 14:08:26.777888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-26 14:08:26.778049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-26 14:08:26.778068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-26 14:08:26.778076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-26 14:08:26.778082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.579 [2024-07-26 14:08:26.778100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.579 qpair failed and we were unable to recover it. 
00:26:59.579 [2024-07-26 14:08:26.787917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-26 14:08:26.788081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-26 14:08:26.788099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-26 14:08:26.788106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-26 14:08:26.788111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.579 [2024-07-26 14:08:26.788129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.579 qpair failed and we were unable to recover it. 00:26:59.579 [2024-07-26 14:08:26.797988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-26 14:08:26.798156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-26 14:08:26.798175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-26 14:08:26.798182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-26 14:08:26.798188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.579 [2024-07-26 14:08:26.798206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.579 qpair failed and we were unable to recover it. 00:26:59.579 [2024-07-26 14:08:26.807967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-26 14:08:26.808143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-26 14:08:26.808162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-26 14:08:26.808173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-26 14:08:26.808179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.579 [2024-07-26 14:08:26.808196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.579 qpair failed and we were unable to recover it. 
00:26:59.579 [2024-07-26 14:08:26.818099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-26 14:08:26.818252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-26 14:08:26.818271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-26 14:08:26.818278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-26 14:08:26.818284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.579 [2024-07-26 14:08:26.818301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.579 qpair failed and we were unable to recover it. 00:26:59.579 [2024-07-26 14:08:26.828031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-26 14:08:26.828185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-26 14:08:26.828204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-26 14:08:26.828211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-26 14:08:26.828217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.579 [2024-07-26 14:08:26.828234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.579 qpair failed and we were unable to recover it. 00:26:59.579 [2024-07-26 14:08:26.838069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-26 14:08:26.838228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-26 14:08:26.838246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-26 14:08:26.838253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-26 14:08:26.838259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.579 [2024-07-26 14:08:26.838277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.579 qpair failed and we were unable to recover it. 
00:26:59.579 [2024-07-26 14:08:26.848155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-26 14:08:26.848303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-26 14:08:26.848322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-26 14:08:26.848329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-26 14:08:26.848336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.579 [2024-07-26 14:08:26.848353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.579 qpair failed and we were unable to recover it. 00:26:59.579 [2024-07-26 14:08:26.858197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-26 14:08:26.858345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-26 14:08:26.858364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-26 14:08:26.858371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-26 14:08:26.858377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.579 [2024-07-26 14:08:26.858394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.579 qpair failed and we were unable to recover it. 00:26:59.579 [2024-07-26 14:08:26.868225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-26 14:08:26.868375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-26 14:08:26.868394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-26 14:08:26.868402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-26 14:08:26.868407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.579 [2024-07-26 14:08:26.868425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.579 qpair failed and we were unable to recover it. 
00:26:59.579 [2024-07-26 14:08:26.878181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-26 14:08:26.878337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-26 14:08:26.878355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-26 14:08:26.878363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-26 14:08:26.878369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.579 [2024-07-26 14:08:26.878385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.579 qpair failed and we were unable to recover it. 00:26:59.579 [2024-07-26 14:08:26.888282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-26 14:08:26.888437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-26 14:08:26.888456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-26 14:08:26.888463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-26 14:08:26.888469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.580 [2024-07-26 14:08:26.888487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.580 qpair failed and we were unable to recover it. 00:26:59.580 [2024-07-26 14:08:26.898306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-26 14:08:26.898454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-26 14:08:26.898473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-26 14:08:26.898483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-26 14:08:26.898489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.580 [2024-07-26 14:08:26.898507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.580 qpair failed and we were unable to recover it. 
00:26:59.580 [2024-07-26 14:08:26.908351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-26 14:08:26.908520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-26 14:08:26.908538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-26 14:08:26.908546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-26 14:08:26.908552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.580 [2024-07-26 14:08:26.908569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.580 qpair failed and we were unable to recover it. 00:26:59.580 [2024-07-26 14:08:26.918370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-26 14:08:26.918525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-26 14:08:26.918544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-26 14:08:26.918551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-26 14:08:26.918557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.580 [2024-07-26 14:08:26.918574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.580 qpair failed and we were unable to recover it. 00:26:59.580 [2024-07-26 14:08:26.928314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-26 14:08:26.928470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-26 14:08:26.928489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-26 14:08:26.928496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-26 14:08:26.928503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.580 [2024-07-26 14:08:26.928519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.580 qpair failed and we were unable to recover it. 
00:26:59.580 [2024-07-26 14:08:26.938412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-26 14:08:26.938565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-26 14:08:26.938584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-26 14:08:26.938591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-26 14:08:26.938597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.580 [2024-07-26 14:08:26.938615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.580 qpair failed and we were unable to recover it. 00:26:59.580 [2024-07-26 14:08:26.948379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-26 14:08:26.948538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-26 14:08:26.948557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-26 14:08:26.948565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-26 14:08:26.948571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.580 [2024-07-26 14:08:26.948588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.580 qpair failed and we were unable to recover it. 00:26:59.580 [2024-07-26 14:08:26.958473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-26 14:08:26.958627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-26 14:08:26.958646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-26 14:08:26.958653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-26 14:08:26.958659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.580 [2024-07-26 14:08:26.958676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.580 qpair failed and we were unable to recover it. 
00:26:59.580 [2024-07-26 14:08:26.968513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-26 14:08:26.968666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-26 14:08:26.968685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-26 14:08:26.968692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-26 14:08:26.968698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.580 [2024-07-26 14:08:26.968715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.580 qpair failed and we were unable to recover it. 00:26:59.580 [2024-07-26 14:08:26.978530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-26 14:08:26.978681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-26 14:08:26.978699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-26 14:08:26.978706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-26 14:08:26.978712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.580 [2024-07-26 14:08:26.978730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.580 qpair failed and we were unable to recover it. 00:26:59.580 [2024-07-26 14:08:26.988573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-26 14:08:26.988726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-26 14:08:26.988748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-26 14:08:26.988755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-26 14:08:26.988761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.580 [2024-07-26 14:08:26.988779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.580 qpair failed and we were unable to recover it. 
00:26:59.580 [2024-07-26 14:08:26.998591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-26 14:08:26.998744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-26 14:08:26.998762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-26 14:08:26.998770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-26 14:08:26.998775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.580 [2024-07-26 14:08:26.998793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.580 qpair failed and we were unable to recover it. 00:26:59.580 [2024-07-26 14:08:27.008638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-26 14:08:27.008797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-26 14:08:27.008816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-26 14:08:27.008824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-26 14:08:27.008830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.580 [2024-07-26 14:08:27.008847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.580 qpair failed and we were unable to recover it. 00:26:59.842 [2024-07-26 14:08:27.018617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.842 [2024-07-26 14:08:27.018773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.842 [2024-07-26 14:08:27.018793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.842 [2024-07-26 14:08:27.018800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.842 [2024-07-26 14:08:27.018807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.842 [2024-07-26 14:08:27.018825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.842 qpair failed and we were unable to recover it. 
00:26:59.842 [2024-07-26 14:08:27.028678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.842 [2024-07-26 14:08:27.028829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.842 [2024-07-26 14:08:27.028848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.842 [2024-07-26 14:08:27.028856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.842 [2024-07-26 14:08:27.028863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.842 [2024-07-26 14:08:27.028884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.842 qpair failed and we were unable to recover it. 00:26:59.842 [2024-07-26 14:08:27.038632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.842 [2024-07-26 14:08:27.038799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.842 [2024-07-26 14:08:27.038818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.842 [2024-07-26 14:08:27.038825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.842 [2024-07-26 14:08:27.038831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.842 [2024-07-26 14:08:27.038849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.842 qpair failed and we were unable to recover it. 00:26:59.842 [2024-07-26 14:08:27.048756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.842 [2024-07-26 14:08:27.048905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.842 [2024-07-26 14:08:27.048924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.842 [2024-07-26 14:08:27.048931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.842 [2024-07-26 14:08:27.048938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.842 [2024-07-26 14:08:27.048954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.842 qpair failed and we were unable to recover it. 
00:26:59.842 [2024-07-26 14:08:27.058734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.842 [2024-07-26 14:08:27.058888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.842 [2024-07-26 14:08:27.058907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.842 [2024-07-26 14:08:27.058914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.842 [2024-07-26 14:08:27.058921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.842 [2024-07-26 14:08:27.058937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.842 qpair failed and we were unable to recover it. 00:26:59.842 [2024-07-26 14:08:27.068710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.842 [2024-07-26 14:08:27.068866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.842 [2024-07-26 14:08:27.068885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.842 [2024-07-26 14:08:27.068893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.842 [2024-07-26 14:08:27.068899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.842 [2024-07-26 14:08:27.068916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.842 qpair failed and we were unable to recover it. 00:26:59.842 [2024-07-26 14:08:27.078817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.842 [2024-07-26 14:08:27.078987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.842 [2024-07-26 14:08:27.079010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.842 [2024-07-26 14:08:27.079018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.843 [2024-07-26 14:08:27.079024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.843 [2024-07-26 14:08:27.079041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.843 qpair failed and we were unable to recover it. 
00:26:59.843 [2024-07-26 14:08:27.088834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.843 [2024-07-26 14:08:27.088986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.843 [2024-07-26 14:08:27.089005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.843 [2024-07-26 14:08:27.089012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.843 [2024-07-26 14:08:27.089018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.843 [2024-07-26 14:08:27.089036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.843 qpair failed and we were unable to recover it. 00:26:59.843 [2024-07-26 14:08:27.098855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.843 [2024-07-26 14:08:27.099009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.843 [2024-07-26 14:08:27.099039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.843 [2024-07-26 14:08:27.099054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.843 [2024-07-26 14:08:27.099060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.843 [2024-07-26 14:08:27.099078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.843 qpair failed and we were unable to recover it. 00:26:59.843 [2024-07-26 14:08:27.108898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.843 [2024-07-26 14:08:27.109060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.843 [2024-07-26 14:08:27.109078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.843 [2024-07-26 14:08:27.109086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.843 [2024-07-26 14:08:27.109092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.843 [2024-07-26 14:08:27.109109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.843 qpair failed and we were unable to recover it. 
00:26:59.843 [2024-07-26 14:08:27.118920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.843 [2024-07-26 14:08:27.119083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.843 [2024-07-26 14:08:27.119102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.843 [2024-07-26 14:08:27.119109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.843 [2024-07-26 14:08:27.119115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.843 [2024-07-26 14:08:27.119136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.843 qpair failed and we were unable to recover it. 00:26:59.843 [2024-07-26 14:08:27.128953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.843 [2024-07-26 14:08:27.129112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.843 [2024-07-26 14:08:27.129131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.843 [2024-07-26 14:08:27.129138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.843 [2024-07-26 14:08:27.129144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.843 [2024-07-26 14:08:27.129162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.843 qpair failed and we were unable to recover it. 00:26:59.843 [2024-07-26 14:08:27.138978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.843 [2024-07-26 14:08:27.139160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.843 [2024-07-26 14:08:27.139178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.843 [2024-07-26 14:08:27.139185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.843 [2024-07-26 14:08:27.139191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.843 [2024-07-26 14:08:27.139209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.843 qpair failed and we were unable to recover it. 
00:26:59.843 [2024-07-26 14:08:27.149016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.843 [2024-07-26 14:08:27.149170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.843 [2024-07-26 14:08:27.149189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.843 [2024-07-26 14:08:27.149196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.843 [2024-07-26 14:08:27.149202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.843 [2024-07-26 14:08:27.149219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.843 qpair failed and we were unable to recover it. 00:26:59.843 [2024-07-26 14:08:27.159047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.843 [2024-07-26 14:08:27.159202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.843 [2024-07-26 14:08:27.159220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.843 [2024-07-26 14:08:27.159228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.843 [2024-07-26 14:08:27.159234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.843 [2024-07-26 14:08:27.159251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.843 qpair failed and we were unable to recover it. 00:26:59.843 [2024-07-26 14:08:27.169066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.843 [2024-07-26 14:08:27.169218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.843 [2024-07-26 14:08:27.169240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.843 [2024-07-26 14:08:27.169247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.843 [2024-07-26 14:08:27.169253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.843 [2024-07-26 14:08:27.169270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.843 qpair failed and we were unable to recover it. 
00:26:59.843 [2024-07-26 14:08:27.179105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.843 [2024-07-26 14:08:27.179274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.843 [2024-07-26 14:08:27.179293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.843 [2024-07-26 14:08:27.179300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.843 [2024-07-26 14:08:27.179306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.843 [2024-07-26 14:08:27.179323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.843 qpair failed and we were unable to recover it. 00:26:59.843 [2024-07-26 14:08:27.189129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.843 [2024-07-26 14:08:27.189282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.843 [2024-07-26 14:08:27.189300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.843 [2024-07-26 14:08:27.189307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.843 [2024-07-26 14:08:27.189313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.843 [2024-07-26 14:08:27.189331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.843 qpair failed and we were unable to recover it. 00:26:59.843 [2024-07-26 14:08:27.199129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.843 [2024-07-26 14:08:27.199283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.843 [2024-07-26 14:08:27.199301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.843 [2024-07-26 14:08:27.199308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.843 [2024-07-26 14:08:27.199314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.843 [2024-07-26 14:08:27.199331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.843 qpair failed and we were unable to recover it. 
00:26:59.843 [2024-07-26 14:08:27.209160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.843 [2024-07-26 14:08:27.209316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.843 [2024-07-26 14:08:27.209336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.843 [2024-07-26 14:08:27.209343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.844 [2024-07-26 14:08:27.209349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.844 [2024-07-26 14:08:27.209370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.844 qpair failed and we were unable to recover it. 00:26:59.844 [2024-07-26 14:08:27.219207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.844 [2024-07-26 14:08:27.219358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.844 [2024-07-26 14:08:27.219377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.844 [2024-07-26 14:08:27.219385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.844 [2024-07-26 14:08:27.219391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.844 [2024-07-26 14:08:27.219408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.844 qpair failed and we were unable to recover it. 00:26:59.844 [2024-07-26 14:08:27.229296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.844 [2024-07-26 14:08:27.229465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.844 [2024-07-26 14:08:27.229483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.844 [2024-07-26 14:08:27.229492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.844 [2024-07-26 14:08:27.229498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.844 [2024-07-26 14:08:27.229515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.844 qpair failed and we were unable to recover it. 
00:26:59.844 [2024-07-26 14:08:27.239284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.844 [2024-07-26 14:08:27.239437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.844 [2024-07-26 14:08:27.239456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.844 [2024-07-26 14:08:27.239463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.844 [2024-07-26 14:08:27.239470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.844 [2024-07-26 14:08:27.239487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.844 qpair failed and we were unable to recover it. 00:26:59.844 [2024-07-26 14:08:27.249309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.844 [2024-07-26 14:08:27.249478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.844 [2024-07-26 14:08:27.249496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.844 [2024-07-26 14:08:27.249504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.844 [2024-07-26 14:08:27.249510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.844 [2024-07-26 14:08:27.249527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.844 qpair failed and we were unable to recover it. 00:26:59.844 [2024-07-26 14:08:27.259324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.844 [2024-07-26 14:08:27.259477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.844 [2024-07-26 14:08:27.259498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.844 [2024-07-26 14:08:27.259506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.844 [2024-07-26 14:08:27.259512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.844 [2024-07-26 14:08:27.259530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.844 qpair failed and we were unable to recover it. 
00:26:59.844 [2024-07-26 14:08:27.269374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.844 [2024-07-26 14:08:27.269531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.844 [2024-07-26 14:08:27.269549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.844 [2024-07-26 14:08:27.269556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.844 [2024-07-26 14:08:27.269563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:26:59.844 [2024-07-26 14:08:27.269580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.844 qpair failed and we were unable to recover it. 00:27:00.105 [2024-07-26 14:08:27.279370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.105 [2024-07-26 14:08:27.279529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.105 [2024-07-26 14:08:27.279548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.105 [2024-07-26 14:08:27.279556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.105 [2024-07-26 14:08:27.279563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.105 [2024-07-26 14:08:27.279580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.105 qpair failed and we were unable to recover it. 00:27:00.105 [2024-07-26 14:08:27.289421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.105 [2024-07-26 14:08:27.289577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.105 [2024-07-26 14:08:27.289596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.105 [2024-07-26 14:08:27.289604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.105 [2024-07-26 14:08:27.289610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.105 [2024-07-26 14:08:27.289627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.105 qpair failed and we were unable to recover it. 
00:27:00.105 [2024-07-26 14:08:27.299485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.105 [2024-07-26 14:08:27.299654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.105 [2024-07-26 14:08:27.299673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.105 [2024-07-26 14:08:27.299680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.105 [2024-07-26 14:08:27.299690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.105 [2024-07-26 14:08:27.299708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.105 qpair failed and we were unable to recover it. 00:27:00.105 [2024-07-26 14:08:27.309453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.105 [2024-07-26 14:08:27.309609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.105 [2024-07-26 14:08:27.309628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.105 [2024-07-26 14:08:27.309635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.105 [2024-07-26 14:08:27.309642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.105 [2024-07-26 14:08:27.309659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.105 qpair failed and we were unable to recover it. 00:27:00.105 [2024-07-26 14:08:27.319508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.105 [2024-07-26 14:08:27.319656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.105 [2024-07-26 14:08:27.319675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.105 [2024-07-26 14:08:27.319682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.105 [2024-07-26 14:08:27.319689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.105 [2024-07-26 14:08:27.319706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.105 qpair failed and we were unable to recover it. 
00:27:00.105 [2024-07-26 14:08:27.329512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.105 [2024-07-26 14:08:27.329886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.105 [2024-07-26 14:08:27.329905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.105 [2024-07-26 14:08:27.329911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.105 [2024-07-26 14:08:27.329917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.105 [2024-07-26 14:08:27.329934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.105 qpair failed and we were unable to recover it. 00:27:00.105 [2024-07-26 14:08:27.339540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.105 [2024-07-26 14:08:27.339693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.105 [2024-07-26 14:08:27.339712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.105 [2024-07-26 14:08:27.339720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.105 [2024-07-26 14:08:27.339725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.105 [2024-07-26 14:08:27.339742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.105 qpair failed and we were unable to recover it. 00:27:00.105 [2024-07-26 14:08:27.349587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.105 [2024-07-26 14:08:27.349745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.105 [2024-07-26 14:08:27.349764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.105 [2024-07-26 14:08:27.349771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.105 [2024-07-26 14:08:27.349777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.105 [2024-07-26 14:08:27.349794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.105 qpair failed and we were unable to recover it. 
00:27:00.105 [2024-07-26 14:08:27.359605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.105 [2024-07-26 14:08:27.359756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.105 [2024-07-26 14:08:27.359774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.105 [2024-07-26 14:08:27.359782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.105 [2024-07-26 14:08:27.359788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.105 [2024-07-26 14:08:27.359805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.105 qpair failed and we were unable to recover it. 00:27:00.105 [2024-07-26 14:08:27.369655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.105 [2024-07-26 14:08:27.369807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.105 [2024-07-26 14:08:27.369825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.105 [2024-07-26 14:08:27.369833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.105 [2024-07-26 14:08:27.369838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.105 [2024-07-26 14:08:27.369855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.105 qpair failed and we were unable to recover it. 00:27:00.106 [2024-07-26 14:08:27.379675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.106 [2024-07-26 14:08:27.379828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.106 [2024-07-26 14:08:27.379846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.106 [2024-07-26 14:08:27.379854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.106 [2024-07-26 14:08:27.379860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.106 [2024-07-26 14:08:27.379877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.106 qpair failed and we were unable to recover it. 
00:27:00.106 [2024-07-26 14:08:27.389689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.106 [2024-07-26 14:08:27.389842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.106 [2024-07-26 14:08:27.389861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.106 [2024-07-26 14:08:27.389868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.106 [2024-07-26 14:08:27.389882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.106 [2024-07-26 14:08:27.389898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.106 qpair failed and we were unable to recover it. 00:27:00.106 [2024-07-26 14:08:27.399736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.106 [2024-07-26 14:08:27.399889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.106 [2024-07-26 14:08:27.399909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.106 [2024-07-26 14:08:27.399917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.106 [2024-07-26 14:08:27.399923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.106 [2024-07-26 14:08:27.399941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.106 qpair failed and we were unable to recover it. 00:27:00.106 [2024-07-26 14:08:27.409687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.106 [2024-07-26 14:08:27.409848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.106 [2024-07-26 14:08:27.409867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.106 [2024-07-26 14:08:27.409874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.106 [2024-07-26 14:08:27.409881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.106 [2024-07-26 14:08:27.409898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.106 qpair failed and we were unable to recover it. 
00:27:00.106 [2024-07-26 14:08:27.419792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.106 [2024-07-26 14:08:27.419945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.106 [2024-07-26 14:08:27.419964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.106 [2024-07-26 14:08:27.419971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.106 [2024-07-26 14:08:27.419977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.106 [2024-07-26 14:08:27.419995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.106 qpair failed and we were unable to recover it. 00:27:00.106 [2024-07-26 14:08:27.429822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.106 [2024-07-26 14:08:27.429980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.106 [2024-07-26 14:08:27.429998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.106 [2024-07-26 14:08:27.430006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.106 [2024-07-26 14:08:27.430012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.106 [2024-07-26 14:08:27.430029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.106 qpair failed and we were unable to recover it. 00:27:00.106 [2024-07-26 14:08:27.439801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.106 [2024-07-26 14:08:27.439957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.106 [2024-07-26 14:08:27.439976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.106 [2024-07-26 14:08:27.439983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.106 [2024-07-26 14:08:27.439990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.106 [2024-07-26 14:08:27.440007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.106 qpair failed and we were unable to recover it. 
00:27:00.106 [2024-07-26 14:08:27.449891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.106 [2024-07-26 14:08:27.450065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.106 [2024-07-26 14:08:27.450085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.106 [2024-07-26 14:08:27.450092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.106 [2024-07-26 14:08:27.450098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.106 [2024-07-26 14:08:27.450116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.106 qpair failed and we were unable to recover it. 00:27:00.106 [2024-07-26 14:08:27.460146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.106 [2024-07-26 14:08:27.460522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.106 [2024-07-26 14:08:27.460540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.106 [2024-07-26 14:08:27.460547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.106 [2024-07-26 14:08:27.460553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.106 [2024-07-26 14:08:27.460569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.106 qpair failed and we were unable to recover it. 00:27:00.106 [2024-07-26 14:08:27.469935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.106 [2024-07-26 14:08:27.470094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.106 [2024-07-26 14:08:27.470113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.106 [2024-07-26 14:08:27.470120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.106 [2024-07-26 14:08:27.470127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.106 [2024-07-26 14:08:27.470144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.106 qpair failed and we were unable to recover it. 
00:27:00.106 [2024-07-26 14:08:27.480005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.106 [2024-07-26 14:08:27.480184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.106 [2024-07-26 14:08:27.480210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.106 [2024-07-26 14:08:27.480222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.106 [2024-07-26 14:08:27.480229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.106 [2024-07-26 14:08:27.480247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.106 qpair failed and we were unable to recover it. 00:27:00.106 [2024-07-26 14:08:27.490000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.106 [2024-07-26 14:08:27.490163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.106 [2024-07-26 14:08:27.490183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.106 [2024-07-26 14:08:27.490190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.106 [2024-07-26 14:08:27.490196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.106 [2024-07-26 14:08:27.490213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.106 qpair failed and we were unable to recover it. 00:27:00.106 [2024-07-26 14:08:27.500038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.106 [2024-07-26 14:08:27.500213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.106 [2024-07-26 14:08:27.500232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.106 [2024-07-26 14:08:27.500239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.106 [2024-07-26 14:08:27.500245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.106 [2024-07-26 14:08:27.500263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.106 qpair failed and we were unable to recover it. 
00:27:00.106 [2024-07-26 14:08:27.510034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.106 [2024-07-26 14:08:27.510191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.106 [2024-07-26 14:08:27.510209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.106 [2024-07-26 14:08:27.510217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.106 [2024-07-26 14:08:27.510224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.106 [2024-07-26 14:08:27.510241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.106 qpair failed and we were unable to recover it. 00:27:00.106 [2024-07-26 14:08:27.520091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.106 [2024-07-26 14:08:27.520253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.106 [2024-07-26 14:08:27.520272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.106 [2024-07-26 14:08:27.520280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.106 [2024-07-26 14:08:27.520286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.106 [2024-07-26 14:08:27.520303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.106 qpair failed and we were unable to recover it. 00:27:00.106 [2024-07-26 14:08:27.530110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.106 [2024-07-26 14:08:27.530264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.106 [2024-07-26 14:08:27.530283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.106 [2024-07-26 14:08:27.530291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.106 [2024-07-26 14:08:27.530297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.106 [2024-07-26 14:08:27.530315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.106 qpair failed and we were unable to recover it. 
00:27:00.367 [2024-07-26 14:08:27.540147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.368 [2024-07-26 14:08:27.540308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.368 [2024-07-26 14:08:27.540327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.368 [2024-07-26 14:08:27.540336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.368 [2024-07-26 14:08:27.540342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.368 [2024-07-26 14:08:27.540360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.368 qpair failed and we were unable to recover it. 00:27:00.368 [2024-07-26 14:08:27.550170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.368 [2024-07-26 14:08:27.550322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.368 [2024-07-26 14:08:27.550341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.368 [2024-07-26 14:08:27.550348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.368 [2024-07-26 14:08:27.550354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.368 [2024-07-26 14:08:27.550371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.368 qpair failed and we were unable to recover it. 00:27:00.368 [2024-07-26 14:08:27.560207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.368 [2024-07-26 14:08:27.560360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.368 [2024-07-26 14:08:27.560378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.368 [2024-07-26 14:08:27.560386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.368 [2024-07-26 14:08:27.560392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.368 [2024-07-26 14:08:27.560409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.368 qpair failed and we were unable to recover it. 
00:27:00.368 [2024-07-26 14:08:27.570247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.368 [2024-07-26 14:08:27.570398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.368 [2024-07-26 14:08:27.570416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.368 [2024-07-26 14:08:27.570427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.368 [2024-07-26 14:08:27.570434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.368 [2024-07-26 14:08:27.570451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.368 qpair failed and we were unable to recover it. 00:27:00.368 [2024-07-26 14:08:27.580299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.368 [2024-07-26 14:08:27.580452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.368 [2024-07-26 14:08:27.580471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.368 [2024-07-26 14:08:27.580478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.368 [2024-07-26 14:08:27.580485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.368 [2024-07-26 14:08:27.580502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.368 qpair failed and we were unable to recover it. 00:27:00.368 [2024-07-26 14:08:27.590340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.368 [2024-07-26 14:08:27.590494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.368 [2024-07-26 14:08:27.590513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.368 [2024-07-26 14:08:27.590520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.368 [2024-07-26 14:08:27.590528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.368 [2024-07-26 14:08:27.590546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.368 qpair failed and we were unable to recover it. 
00:27:00.368 [2024-07-26 14:08:27.600353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.368 [2024-07-26 14:08:27.600506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.368 [2024-07-26 14:08:27.600525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.368 [2024-07-26 14:08:27.600532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.368 [2024-07-26 14:08:27.600540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.368 [2024-07-26 14:08:27.600557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.368 qpair failed and we were unable to recover it. 00:27:00.368 [2024-07-26 14:08:27.610365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.368 [2024-07-26 14:08:27.610521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.368 [2024-07-26 14:08:27.610542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.368 [2024-07-26 14:08:27.610550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.368 [2024-07-26 14:08:27.610557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.368 [2024-07-26 14:08:27.610575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.368 qpair failed and we were unable to recover it. 00:27:00.368 [2024-07-26 14:08:27.620388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.368 [2024-07-26 14:08:27.620542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.368 [2024-07-26 14:08:27.620561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.368 [2024-07-26 14:08:27.620568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.368 [2024-07-26 14:08:27.620575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.368 [2024-07-26 14:08:27.620593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.368 qpair failed and we were unable to recover it. 
00:27:00.368 [2024-07-26 14:08:27.630337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.368 [2024-07-26 14:08:27.630493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.368 [2024-07-26 14:08:27.630512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.368 [2024-07-26 14:08:27.630521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.368 [2024-07-26 14:08:27.630528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.368 [2024-07-26 14:08:27.630545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.368 qpair failed and we were unable to recover it. 00:27:00.368 [2024-07-26 14:08:27.640477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.368 [2024-07-26 14:08:27.640629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.368 [2024-07-26 14:08:27.640647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.368 [2024-07-26 14:08:27.640654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.368 [2024-07-26 14:08:27.640661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.368 [2024-07-26 14:08:27.640679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.368 qpair failed and we were unable to recover it. 00:27:00.368 [2024-07-26 14:08:27.650466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.368 [2024-07-26 14:08:27.650619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.368 [2024-07-26 14:08:27.650637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.368 [2024-07-26 14:08:27.650645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.368 [2024-07-26 14:08:27.650650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.368 [2024-07-26 14:08:27.650667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.368 qpair failed and we were unable to recover it. 
00:27:00.368 [2024-07-26 14:08:27.660495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.368 [2024-07-26 14:08:27.660644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.368 [2024-07-26 14:08:27.660663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.368 [2024-07-26 14:08:27.660673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.368 [2024-07-26 14:08:27.660680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.368 [2024-07-26 14:08:27.660697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.368 qpair failed and we were unable to recover it. 00:27:00.368 [2024-07-26 14:08:27.670505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.369 [2024-07-26 14:08:27.670661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.369 [2024-07-26 14:08:27.670679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.369 [2024-07-26 14:08:27.670687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.369 [2024-07-26 14:08:27.670694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.369 [2024-07-26 14:08:27.670711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.369 qpair failed and we were unable to recover it. 00:27:00.369 [2024-07-26 14:08:27.680549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.369 [2024-07-26 14:08:27.680702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.369 [2024-07-26 14:08:27.680720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.369 [2024-07-26 14:08:27.680728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.369 [2024-07-26 14:08:27.680733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.369 [2024-07-26 14:08:27.680751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.369 qpair failed and we were unable to recover it. 
00:27:00.369 [2024-07-26 14:08:27.690582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.369 [2024-07-26 14:08:27.690733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.369 [2024-07-26 14:08:27.690752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.369 [2024-07-26 14:08:27.690759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.369 [2024-07-26 14:08:27.690766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.369 [2024-07-26 14:08:27.690783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.369 qpair failed and we were unable to recover it. 00:27:00.369 [2024-07-26 14:08:27.700608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.369 [2024-07-26 14:08:27.700763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.369 [2024-07-26 14:08:27.700781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.369 [2024-07-26 14:08:27.700789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.369 [2024-07-26 14:08:27.700795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.369 [2024-07-26 14:08:27.700812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.369 qpair failed and we were unable to recover it. 00:27:00.369 [2024-07-26 14:08:27.710636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.369 [2024-07-26 14:08:27.710789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.369 [2024-07-26 14:08:27.710808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.369 [2024-07-26 14:08:27.710815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.369 [2024-07-26 14:08:27.710822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.369 [2024-07-26 14:08:27.710839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.369 qpair failed and we were unable to recover it. 
00:27:00.369 [2024-07-26 14:08:27.720883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.369 [2024-07-26 14:08:27.721036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.369 [2024-07-26 14:08:27.721059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.369 [2024-07-26 14:08:27.721066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.369 [2024-07-26 14:08:27.721073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.369 [2024-07-26 14:08:27.721092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.369 qpair failed and we were unable to recover it. 00:27:00.369 [2024-07-26 14:08:27.730689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.369 [2024-07-26 14:08:27.730837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.369 [2024-07-26 14:08:27.730856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.369 [2024-07-26 14:08:27.730863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.369 [2024-07-26 14:08:27.730870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.369 [2024-07-26 14:08:27.730886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.369 qpair failed and we were unable to recover it. 00:27:00.369 [2024-07-26 14:08:27.740714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.369 [2024-07-26 14:08:27.740867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.369 [2024-07-26 14:08:27.740885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.369 [2024-07-26 14:08:27.740893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.369 [2024-07-26 14:08:27.740898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.369 [2024-07-26 14:08:27.740916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.369 qpair failed and we were unable to recover it. 
00:27:00.369 [2024-07-26 14:08:27.750754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.369 [2024-07-26 14:08:27.750911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.369 [2024-07-26 14:08:27.750932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.369 [2024-07-26 14:08:27.750941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.369 [2024-07-26 14:08:27.750947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.369 [2024-07-26 14:08:27.750965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.369 qpair failed and we were unable to recover it. 00:27:00.369 [2024-07-26 14:08:27.760756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.369 [2024-07-26 14:08:27.760911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.369 [2024-07-26 14:08:27.760929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.369 [2024-07-26 14:08:27.760936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.369 [2024-07-26 14:08:27.760943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.369 [2024-07-26 14:08:27.760960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.369 qpair failed and we were unable to recover it. 00:27:00.369 [2024-07-26 14:08:27.770807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.369 [2024-07-26 14:08:27.770956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.369 [2024-07-26 14:08:27.770974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.369 [2024-07-26 14:08:27.770981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.369 [2024-07-26 14:08:27.770987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.369 [2024-07-26 14:08:27.771004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.369 qpair failed and we were unable to recover it. 
00:27:00.369 [2024-07-26 14:08:27.780835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.369 [2024-07-26 14:08:27.781002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.369 [2024-07-26 14:08:27.781020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.369 [2024-07-26 14:08:27.781027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.369 [2024-07-26 14:08:27.781035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.369 [2024-07-26 14:08:27.781059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.369 qpair failed and we were unable to recover it. 00:27:00.369 [2024-07-26 14:08:27.790865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.369 [2024-07-26 14:08:27.791019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.369 [2024-07-26 14:08:27.791036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.369 [2024-07-26 14:08:27.791049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.369 [2024-07-26 14:08:27.791055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.369 [2024-07-26 14:08:27.791073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.369 qpair failed and we were unable to recover it. 00:27:00.369 [2024-07-26 14:08:27.800882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.369 [2024-07-26 14:08:27.801037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.369 [2024-07-26 14:08:27.801063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.370 [2024-07-26 14:08:27.801070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.370 [2024-07-26 14:08:27.801077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.370 [2024-07-26 14:08:27.801094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.370 qpair failed and we were unable to recover it. 
00:27:00.630 [2024-07-26 14:08:27.810914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-26 14:08:27.811075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-26 14:08:27.811094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-26 14:08:27.811102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-26 14:08:27.811108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.630 [2024-07-26 14:08:27.811126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.630 qpair failed and we were unable to recover it. 00:27:00.630 [2024-07-26 14:08:27.820943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-26 14:08:27.821101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-26 14:08:27.821120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-26 14:08:27.821127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-26 14:08:27.821134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.630 [2024-07-26 14:08:27.821152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.630 qpair failed and we were unable to recover it. 00:27:00.630 [2024-07-26 14:08:27.830978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-26 14:08:27.831139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-26 14:08:27.831158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-26 14:08:27.831165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-26 14:08:27.831172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.630 [2024-07-26 14:08:27.831189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.630 qpair failed and we were unable to recover it. 
00:27:00.630 [2024-07-26 14:08:27.841006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-26 14:08:27.841393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-26 14:08:27.841415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-26 14:08:27.841422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-26 14:08:27.841428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.630 [2024-07-26 14:08:27.841446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.630 qpair failed and we were unable to recover it. 00:27:00.630 [2024-07-26 14:08:27.851248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-26 14:08:27.851406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-26 14:08:27.851424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-26 14:08:27.851431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-26 14:08:27.851437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.630 [2024-07-26 14:08:27.851455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.630 qpair failed and we were unable to recover it. 00:27:00.630 [2024-07-26 14:08:27.861004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-26 14:08:27.861165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-26 14:08:27.861184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-26 14:08:27.861191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-26 14:08:27.861197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.630 [2024-07-26 14:08:27.861215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.630 qpair failed and we were unable to recover it. 
00:27:00.630 [2024-07-26 14:08:27.871075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-26 14:08:27.871228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-26 14:08:27.871246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-26 14:08:27.871253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-26 14:08:27.871260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.630 [2024-07-26 14:08:27.871277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.630 qpair failed and we were unable to recover it. 00:27:00.630 [2024-07-26 14:08:27.881123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-26 14:08:27.881273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-26 14:08:27.881291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-26 14:08:27.881299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-26 14:08:27.881305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.630 [2024-07-26 14:08:27.881325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.630 qpair failed and we were unable to recover it. 00:27:00.630 [2024-07-26 14:08:27.891082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-26 14:08:27.891236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-26 14:08:27.891254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-26 14:08:27.891262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-26 14:08:27.891268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.630 [2024-07-26 14:08:27.891286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.630 qpair failed and we were unable to recover it. 
00:27:00.630 [2024-07-26 14:08:27.901326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-26 14:08:27.901483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-26 14:08:27.901502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-26 14:08:27.901509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-26 14:08:27.901515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.630 [2024-07-26 14:08:27.901532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.630 qpair failed and we were unable to recover it. 00:27:00.630 [2024-07-26 14:08:27.911260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-26 14:08:27.911429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-26 14:08:27.911447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-26 14:08:27.911455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-26 14:08:27.911461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.630 [2024-07-26 14:08:27.911478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.630 qpair failed and we were unable to recover it. 00:27:00.631 [2024-07-26 14:08:27.921223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-26 14:08:27.921381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-26 14:08:27.921399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-26 14:08:27.921409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-26 14:08:27.921416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.631 [2024-07-26 14:08:27.921436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.631 qpair failed and we were unable to recover it. 
00:27:00.631 [2024-07-26 14:08:27.931186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-26 14:08:27.931344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-26 14:08:27.931367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-26 14:08:27.931375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-26 14:08:27.931381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.631 [2024-07-26 14:08:27.931398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.631 qpair failed and we were unable to recover it. 00:27:00.631 [2024-07-26 14:08:27.941213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-26 14:08:27.941367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-26 14:08:27.941385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-26 14:08:27.941392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-26 14:08:27.941399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.631 [2024-07-26 14:08:27.941417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.631 qpair failed and we were unable to recover it. 00:27:00.631 [2024-07-26 14:08:27.951343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-26 14:08:27.951509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-26 14:08:27.951528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-26 14:08:27.951535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-26 14:08:27.951542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.631 [2024-07-26 14:08:27.951560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.631 qpair failed and we were unable to recover it. 
00:27:00.631 [2024-07-26 14:08:27.961310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-26 14:08:27.961508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-26 14:08:27.961526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-26 14:08:27.961533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-26 14:08:27.961539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15f2f30 00:27:00.631 [2024-07-26 14:08:27.961556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.631 qpair failed and we were unable to recover it. 00:27:00.631 [2024-07-26 14:08:27.971404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-26 14:08:27.971604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-26 14:08:27.971632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-26 14:08:27.971643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-26 14:08:27.971652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:27:00.631 [2024-07-26 14:08:27.971680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:00.631 qpair failed and we were unable to recover it. 00:27:00.631 [2024-07-26 14:08:27.981344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-26 14:08:27.981497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-26 14:08:27.981516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-26 14:08:27.981524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-26 14:08:27.981531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c0000b90 00:27:00.631 [2024-07-26 14:08:27.981549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:00.631 qpair failed and we were unable to recover it. 
00:27:00.631 [2024-07-26 14:08:27.981823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600ff0 is same with the state(5) to be set 00:27:00.631 [2024-07-26 14:08:27.991436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-26 14:08:27.991628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-26 14:08:27.991656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-26 14:08:27.991668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-26 14:08:27.991677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7d0000b90 00:27:00.631 [2024-07-26 14:08:27.991702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.631 qpair failed and we were unable to recover it. 00:27:00.631 [2024-07-26 14:08:28.001441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-26 14:08:28.001598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-26 14:08:28.001617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-26 14:08:28.001626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-26 14:08:28.001632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7d0000b90 00:27:00.631 [2024-07-26 14:08:28.001651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.631 qpair failed and we were unable to recover it. 00:27:00.631 [2024-07-26 14:08:28.011463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-26 14:08:28.011617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-26 14:08:28.011639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-26 14:08:28.011648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-26 14:08:28.011655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:27:00.631 [2024-07-26 14:08:28.011675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.631 qpair failed and we were unable to recover it. 
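The failure blocks in this stretch all follow one pattern: while the disconnect test is tearing the target down, the target side rejects each I/O queue-pair CONNECT with "Unknown controller ID 0x1", and the host sees the Fabrics Connect completion fail with sct 1, sc 130. Status code type 1 is command-specific status and 130 is 0x82, which for a Fabrics Connect command appears to be the invalid-parameters rejection, consistent with the controller ID no longer existing on the target; the host then abandons that qpair and retries. A small, hedged sketch for summarizing such a run offline; build.log is only a placeholder for wherever this console output was captured:

  # Tally the Connect completion statuses seen by the host side.
  grep -o 'Connect command completed with error: sct [0-9]*, sc [0-9]*' build.log | sort | uniq -c
  # Tally which qpair ids hit the CQ transport error, to confirm which queues were affected.
  grep -o 'on qpair id [0-9]*' build.log | sort | uniq -c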
00:27:00.631 [2024-07-26 14:08:28.021511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-26 14:08:28.021669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-26 14:08:28.021689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-26 14:08:28.021696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-26 14:08:28.021702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:27:00.631 [2024-07-26 14:08:28.021721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.631 qpair failed and we were unable to recover it. 00:27:00.631 [2024-07-26 14:08:28.021996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1600ff0 (9): Bad file descriptor 00:27:00.631 Initializing NVMe Controllers 00:27:00.631 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:00.631 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:00.631 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:00.631 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:00.631 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:00.631 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:00.631 Initialization complete. Launching workers. 
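Once the admin connection comes back, the test reattaches to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 and fans the workload out over lcores 0 through 3, which is what the "Associating TCP ... with lcore N" and "Starting thread on core N" lines record. The exact example binary and options the harness launches are not visible in this log, so the following is only an approximation of that load using SPDK's perf example; the queue depth, block size, read/write mix and runtime are placeholder values:

  # Hedged approximation of a 4-core I/O load driven against the TCP target.
  ./build/examples/perf \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -c 0xF -q 32 -o 4096 -w randrw -M 50 -t 10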
00:27:00.631 Starting thread on core 1 00:27:00.631 Starting thread on core 2 00:27:00.631 Starting thread on core 3 00:27:00.631 Starting thread on core 0 00:27:00.631 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:00.631 00:27:00.631 real 0m11.277s 00:27:00.631 user 0m20.335s 00:27:00.631 sys 0m4.379s 00:27:00.631 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:00.631 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:00.631 ************************************ 00:27:00.631 END TEST nvmf_target_disconnect_tc2 00:27:00.631 ************************************ 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:00.891 rmmod nvme_tcp 00:27:00.891 rmmod nvme_fabrics 00:27:00.891 rmmod nvme_keyring 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3117823 ']' 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3117823 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3117823 ']' 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 3117823 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3117823 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3117823' 00:27:00.891 killing process with pid 3117823 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 3117823 00:27:00.891 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 3117823 00:27:01.150 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:01.150 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:01.150 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:01.150 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:01.150 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:01.150 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.150 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:01.150 14:08:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.060 14:08:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:03.060 00:27:03.060 real 0m19.264s 00:27:03.060 user 0m47.244s 00:27:03.060 sys 0m8.776s 00:27:03.060 14:08:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:03.060 14:08:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:03.060 ************************************ 00:27:03.060 END TEST nvmf_target_disconnect 00:27:03.060 ************************************ 00:27:03.060 14:08:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:03.319 00:27:03.319 real 5m47.782s 00:27:03.319 user 10m54.269s 00:27:03.319 sys 1m46.039s 00:27:03.319 14:08:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:03.319 14:08:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.319 ************************************ 00:27:03.319 END TEST nvmf_host 00:27:03.319 ************************************ 00:27:03.319 00:27:03.319 real 20m58.889s 00:27:03.319 user 45m11.328s 00:27:03.319 sys 6m16.281s 00:27:03.319 14:08:30 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:03.319 14:08:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:03.319 ************************************ 00:27:03.319 END TEST nvmf_tcp 00:27:03.319 ************************************ 00:27:03.319 14:08:30 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:27:03.319 14:08:30 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:03.319 14:08:30 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:03.319 14:08:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:03.319 14:08:30 -- common/autotest_common.sh@10 -- # set +x 00:27:03.319 ************************************ 00:27:03.319 START TEST spdkcli_nvmf_tcp 00:27:03.319 ************************************ 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:03.319 * Looking for test storage... 
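The spdkcli_nvmf_tcp test that begins here starts a dedicated nvmf_tgt (-m 0x3 -p 0), then uses spdkcli_job.py to build up an NVMe-oF configuration (malloc bdevs, a TCP transport, subsystems with namespaces, listeners and allowed hosts), checks 'll /nvmf' against a match file, and tears everything back down. The same configuration can be sketched with direct spdkcli invocations; paths are relative to the SPDK repository, the values are copied from the job commands that appear further down, and whether every command is accepted verbatim on the spdkcli.py command line may depend on the SPDK version:

  # Minimal sketch of the configuration the spdkcli job below creates.
  scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
  scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
  scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
  scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
  scripts/spdkcli.py ll /nvmf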
00:27:03.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.319 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.320 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:03.320 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:03.320 14:08:30 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:03.320 14:08:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:03.320 14:08:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:03.320 14:08:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:03.320 14:08:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:03.320 14:08:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:03.320 14:08:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:03.320 14:08:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:03.320 14:08:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3119354 00:27:03.320 14:08:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3119354 00:27:03.320 14:08:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 3119354 ']' 00:27:03.320 14:08:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:03.320 14:08:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.320 14:08:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:03.320 14:08:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.320 14:08:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:03.320 14:08:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:03.320 [2024-07-26 14:08:30.753786] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:27:03.320 [2024-07-26 14:08:30.753839] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3119354 ] 00:27:03.579 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.579 [2024-07-26 14:08:30.808129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:03.579 [2024-07-26 14:08:30.888355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.579 [2024-07-26 14:08:30.888358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.149 14:08:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:04.149 14:08:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:27:04.149 14:08:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:04.149 14:08:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:04.149 14:08:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:04.409 14:08:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:04.409 14:08:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:04.409 14:08:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:04.409 14:08:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:04.409 14:08:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:04.409 14:08:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:04.409 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:04.409 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:04.409 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:04.409 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:04.409 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:04.409 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:04.409 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:04.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:04.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:04.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:04.409 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:04.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:04.409 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:04.409 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:04.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:04.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:04.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:04.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:04.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:04.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:04.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:04.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:04.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:04.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:04.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:04.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:04.409 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:04.409 ' 00:27:06.947 [2024-07-26 14:08:33.968896] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.887 [2024-07-26 14:08:35.148846] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:10.427 [2024-07-26 14:08:37.319492] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:11.808 [2024-07-26 14:08:39.189337] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:13.189 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:13.189 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:13.189 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:13.189 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:13.189 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:13.189 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:13.189 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:13.189 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:13.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:13.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:13.189 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:13.189 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:13.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:13.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:13.189 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:13.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:13.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:13.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:13.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:13.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:13.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:13.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:13.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:13.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:13.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:13.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:13.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:13.189 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:13.448 14:08:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:13.448 14:08:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:13.448 14:08:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:13.448 14:08:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:13.448 14:08:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:13.448 14:08:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:13.448 14:08:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:13.448 14:08:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:13.708 14:08:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:13.968 14:08:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:13.968 14:08:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:13.968 14:08:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:13.968 14:08:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:13.968 14:08:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:13.968 14:08:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:13.968 14:08:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:13.968 14:08:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:13.968 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:13.968 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:13.968 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:13.968 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:13.968 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:13.968 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:13.968 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:13.968 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:13.968 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:13.968 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:13.968 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:13.968 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:13.968 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:13.968 ' 00:27:19.256 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:19.256 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:19.256 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:19.256 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:19.256 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:19.256 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:19.256 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:19.256 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:19.256 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:19.256 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:19.256 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:27:19.256 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:19.256 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:19.256 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3119354 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3119354 ']' 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3119354 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3119354 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3119354' 00:27:19.256 killing process with pid 3119354 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 3119354 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 3119354 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3119354 ']' 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3119354 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3119354 ']' 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3119354 00:27:19.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3119354) - No such process 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 3119354 is not found' 00:27:19.256 Process with pid 3119354 is not found 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:19.256 00:27:19.256 real 0m15.854s 00:27:19.256 user 0m32.900s 00:27:19.256 sys 0m0.696s 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:19.256 14:08:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:19.256 ************************************ 00:27:19.256 END TEST spdkcli_nvmf_tcp 00:27:19.256 ************************************ 00:27:19.256 14:08:46 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:19.256 14:08:46 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:19.256 14:08:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:19.256 14:08:46 -- common/autotest_common.sh@10 -- # set +x 00:27:19.256 ************************************ 00:27:19.256 START TEST nvmf_identify_passthru 00:27:19.256 ************************************ 00:27:19.256 14:08:46 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:19.256 * Looking for test storage... 00:27:19.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:19.256 14:08:46 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:19.256 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:19.256 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.256 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.256 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.256 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.256 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.256 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.256 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.256 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.256 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.256 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.256 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:19.256 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:19.256 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.256 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.256 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:19.256 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:19.256 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.256 14:08:46 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.257 14:08:46 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.257 14:08:46 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.257 14:08:46 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.257 14:08:46 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.257 14:08:46 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.257 14:08:46 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:19.257 14:08:46 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.257 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:27:19.257 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:19.257 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:19.257 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:19.257 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.257 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.257 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:19.257 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:19.257 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:19.257 14:08:46 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.257 14:08:46 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.257 14:08:46 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.257 14:08:46 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.257 14:08:46 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.257 14:08:46 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.257 14:08:46 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.257 14:08:46 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:19.257 14:08:46 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.257 14:08:46 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:19.257 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:19.257 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.257 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:19.257 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:19.257 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:19.257 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.257 14:08:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:19.257 14:08:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.257 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:19.257 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:19.257 14:08:46 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:27:19.257 14:08:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:24.542 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:24.542 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:24.542 14:08:51 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:24.542 Found net devices under 0000:86:00.0: cvl_0_0 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:24.542 Found net devices under 0000:86:00.1: cvl_0_1 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:24.542 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:27:24.543 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:24.543 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:24.543 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:24.543 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.543 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.543 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:24.543 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:24.543 14:08:51 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:24.543 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:24.543 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:24.543 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:24.543 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:24.543 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:24.543 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:24.543 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:24.543 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:24.543 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:24.543 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:24.543 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:24.543 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.543 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:24.543 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:24.803 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:24.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:24.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:27:24.803 00:27:24.803 --- 10.0.0.2 ping statistics --- 00:27:24.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.803 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:27:24.803 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:24.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:24.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:27:24.803 00:27:24.803 --- 10.0.0.1 ping statistics --- 00:27:24.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.803 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:27:24.803 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:24.803 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:27:24.803 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:24.803 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:24.803 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:24.803 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:24.803 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:24.803 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:24.803 14:08:51 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:24.803 14:08:52 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:24.803 14:08:52 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:24.803 14:08:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:24.803 14:08:52 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:24.803 14:08:52 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:27:24.803 14:08:52 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:27:24.803 14:08:52 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:27:24.803 14:08:52 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:27:24.803 14:08:52 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:27:24.803 14:08:52 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:27:24.803 14:08:52 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:24.803 14:08:52 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:24.803 14:08:52 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:27:24.803 14:08:52 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:27:24.804 14:08:52 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:27:24.804 14:08:52 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:27:24.804 14:08:52 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:27:24.804 14:08:52 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:27:24.804 14:08:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:24.804 14:08:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:24.804 14:08:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:24.804 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.076 
14:08:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:27:29.076 14:08:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:29.076 14:08:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:29.076 14:08:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:29.076 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.290 14:09:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:33.290 14:09:00 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:33.290 14:09:00 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:33.290 14:09:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:33.290 14:09:00 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:33.290 14:09:00 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:33.290 14:09:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:33.290 14:09:00 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3126399 00:27:33.290 14:09:00 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:33.290 14:09:00 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3126399 00:27:33.290 14:09:00 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 3126399 ']' 00:27:33.290 14:09:00 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.290 14:09:00 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:33.290 14:09:00 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:33.290 14:09:00 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.290 14:09:00 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:33.290 14:09:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:33.290 [2024-07-26 14:09:00.447232] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:27:33.290 [2024-07-26 14:09:00.447278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.290 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.290 [2024-07-26 14:09:00.504317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:33.290 [2024-07-26 14:09:00.584468] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.290 [2024-07-26 14:09:00.584509] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:33.290 [2024-07-26 14:09:00.584517] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:33.290 [2024-07-26 14:09:00.584523] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:33.290 [2024-07-26 14:09:00.584529] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:33.290 [2024-07-26 14:09:00.584569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.290 [2024-07-26 14:09:00.584817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:33.290 [2024-07-26 14:09:00.584835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:33.290 [2024-07-26 14:09:00.584836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.860 14:09:01 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:33.860 14:09:01 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:27:33.860 14:09:01 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:33.860 14:09:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.860 14:09:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:33.860 INFO: Log level set to 20 00:27:33.860 INFO: Requests: 00:27:33.860 { 00:27:33.860 "jsonrpc": "2.0", 00:27:33.860 "method": "nvmf_set_config", 00:27:33.860 "id": 1, 00:27:33.860 "params": { 00:27:33.860 "admin_cmd_passthru": { 00:27:33.860 "identify_ctrlr": true 00:27:33.860 } 00:27:33.860 } 00:27:33.860 } 00:27:33.860 00:27:33.860 INFO: response: 00:27:33.860 { 00:27:33.860 "jsonrpc": "2.0", 00:27:33.860 "id": 1, 00:27:33.860 "result": true 00:27:33.860 } 00:27:33.860 00:27:33.860 14:09:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.860 14:09:01 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:33.860 14:09:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.860 14:09:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:33.860 INFO: Setting log level to 20 00:27:33.860 INFO: Setting log level to 20 00:27:33.860 INFO: Log level set to 20 00:27:33.860 INFO: Log level set to 20 00:27:33.860 INFO: Requests: 00:27:33.860 { 00:27:33.860 "jsonrpc": "2.0", 00:27:33.860 "method": "framework_start_init", 00:27:33.860 "id": 1 00:27:33.860 } 00:27:33.860 00:27:33.860 INFO: Requests: 00:27:33.860 { 00:27:33.860 "jsonrpc": "2.0", 00:27:33.860 "method": "framework_start_init", 00:27:33.860 "id": 1 00:27:33.860 } 00:27:33.860 00:27:34.121 [2024-07-26 14:09:01.350891] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:34.121 INFO: response: 00:27:34.121 { 00:27:34.121 "jsonrpc": "2.0", 00:27:34.121 "id": 1, 00:27:34.121 "result": true 00:27:34.121 } 00:27:34.121 00:27:34.121 INFO: response: 00:27:34.121 { 00:27:34.121 "jsonrpc": "2.0", 00:27:34.121 "id": 1, 00:27:34.121 "result": true 00:27:34.121 } 00:27:34.121 00:27:34.121 14:09:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.121 14:09:01 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:34.121 14:09:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.121 14:09:01 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:27:34.121 INFO: Setting log level to 40 00:27:34.121 INFO: Setting log level to 40 00:27:34.121 INFO: Setting log level to 40 00:27:34.121 [2024-07-26 14:09:01.364241] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.121 14:09:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.121 14:09:01 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:34.121 14:09:01 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:34.121 14:09:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:34.121 14:09:01 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:27:34.121 14:09:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.121 14:09:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:37.416 Nvme0n1 00:27:37.416 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.416 14:09:04 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:37.416 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.416 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:37.416 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.416 14:09:04 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:37.416 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.416 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:37.416 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.416 14:09:04 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:37.416 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.416 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:37.416 [2024-07-26 14:09:04.272266] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.416 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.416 14:09:04 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:37.416 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.416 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:37.416 [ 00:27:37.416 { 00:27:37.416 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:37.416 "subtype": "Discovery", 00:27:37.416 "listen_addresses": [], 00:27:37.416 "allow_any_host": true, 00:27:37.416 "hosts": [] 00:27:37.416 }, 00:27:37.416 { 00:27:37.416 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:37.416 "subtype": "NVMe", 00:27:37.416 "listen_addresses": [ 00:27:37.416 { 00:27:37.416 "trtype": "TCP", 00:27:37.416 "adrfam": "IPv4", 00:27:37.416 "traddr": "10.0.0.2", 00:27:37.416 "trsvcid": "4420" 00:27:37.416 } 00:27:37.416 ], 00:27:37.416 "allow_any_host": true, 00:27:37.416 "hosts": [], 00:27:37.416 "serial_number": 
"SPDK00000000000001", 00:27:37.416 "model_number": "SPDK bdev Controller", 00:27:37.416 "max_namespaces": 1, 00:27:37.416 "min_cntlid": 1, 00:27:37.416 "max_cntlid": 65519, 00:27:37.416 "namespaces": [ 00:27:37.416 { 00:27:37.416 "nsid": 1, 00:27:37.416 "bdev_name": "Nvme0n1", 00:27:37.416 "name": "Nvme0n1", 00:27:37.416 "nguid": "A8E956A3E55249BE9473A41D58EA347A", 00:27:37.416 "uuid": "a8e956a3-e552-49be-9473-a41d58ea347a" 00:27:37.416 } 00:27:37.416 ] 00:27:37.416 } 00:27:37.416 ] 00:27:37.416 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.416 14:09:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:37.417 14:09:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:37.417 14:09:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:37.417 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.417 14:09:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:27:37.417 14:09:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:37.417 14:09:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:37.417 14:09:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:37.417 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.417 14:09:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:37.417 14:09:04 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:27:37.417 14:09:04 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:37.417 14:09:04 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:37.417 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.417 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:37.417 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.417 14:09:04 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:37.417 14:09:04 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:37.417 14:09:04 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:37.417 14:09:04 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:27:37.417 14:09:04 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:37.417 14:09:04 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:27:37.417 14:09:04 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:37.417 14:09:04 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:37.417 rmmod nvme_tcp 00:27:37.417 rmmod nvme_fabrics 00:27:37.417 rmmod nvme_keyring 00:27:37.417 14:09:04 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:37.417 14:09:04 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:27:37.417 14:09:04 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:27:37.417 14:09:04 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3126399 ']' 00:27:37.417 14:09:04 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3126399 00:27:37.417 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 3126399 ']' 00:27:37.417 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 3126399 00:27:37.417 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:27:37.417 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:37.417 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3126399 00:27:37.417 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:37.417 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:37.417 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3126399' 00:27:37.417 killing process with pid 3126399 00:27:37.417 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 3126399 00:27:37.417 14:09:04 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 3126399 00:27:39.328 14:09:06 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:39.328 14:09:06 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:39.328 14:09:06 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:39.328 14:09:06 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:39.328 14:09:06 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:39.328 14:09:06 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.328 14:09:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:39.328 14:09:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.239 14:09:08 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:41.239 00:27:41.239 real 0m21.832s 00:27:41.239 user 0m30.048s 00:27:41.239 sys 0m4.950s 00:27:41.239 14:09:08 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:41.239 14:09:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:41.239 ************************************ 00:27:41.239 END TEST nvmf_identify_passthru 00:27:41.239 ************************************ 00:27:41.239 14:09:08 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:41.239 14:09:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:41.239 14:09:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:41.239 14:09:08 -- common/autotest_common.sh@10 -- # set +x 00:27:41.239 ************************************ 00:27:41.239 START TEST nvmf_dif 00:27:41.239 ************************************ 00:27:41.239 14:09:08 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:41.239 * Looking for test storage... 
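The identify_passthru run above passes because the controller data read back through the TCP subsystem matches what the same drive reports directly over PCIe. The check the script makes (identify_passthru.sh@63/@68 in the trace) boils down to the comparison sketched here; the spdk_nvme_identify arguments, addresses and NQN are the ones from this run, with paths shortened to the build tree:

    sn_pcie=$(build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 | grep 'Serial Number:' | awk '{print $3}')
    sn_tcp=$(build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:' | awk '{print $3}')
    # both reads returned BTLJ72430F0E1P0FGN in this run, so the test passes
    [ "$sn_pcie" != "$sn_tcp" ] && exit 1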
00:27:41.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:41.239 14:09:08 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:41.239 14:09:08 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:41.239 14:09:08 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:41.239 14:09:08 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:41.239 14:09:08 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.239 14:09:08 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.239 14:09:08 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.239 14:09:08 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:27:41.239 14:09:08 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:41.239 14:09:08 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:41.239 14:09:08 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:41.239 14:09:08 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:41.239 14:09:08 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:41.239 14:09:08 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.239 14:09:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:41.239 14:09:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:41.239 14:09:08 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:27:41.239 14:09:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:46.524 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:46.524 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
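The discovery loop here maps each supported NIC from its PCI address to the kernel net device exposed in sysfs. Done by hand the same lookup is roughly the following (the 0000:86:00.x addresses and the resulting cvl_0_0/cvl_0_1 names are specific to this test bed):

    for pci in 0000:86:00.0 0000:86:00.1; do
        # each entry under .../net/ is a netdev bound to that PCI function
        ls "/sys/bus/pci/devices/$pci/net/"
    done
    # prints cvl_0_0 and cvl_0_1, the two E810 ports (device id 0x159b) used for the TCP tests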
00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:46.524 Found net devices under 0000:86:00.0: cvl_0_0 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:46.524 Found net devices under 0000:86:00.1: cvl_0_1 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:46.524 14:09:13 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:46.525 14:09:13 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:46.525 14:09:13 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:46.525 14:09:13 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:46.525 14:09:13 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:46.525 14:09:13 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:46.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:46.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:27:46.525 00:27:46.525 --- 10.0.0.2 ping statistics --- 00:27:46.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.525 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:27:46.525 14:09:13 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:46.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:46.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.427 ms 00:27:46.525 00:27:46.525 --- 10.0.0.1 ping statistics --- 00:27:46.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.525 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:27:46.525 14:09:13 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:46.525 14:09:13 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:27:46.525 14:09:13 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:46.525 14:09:13 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:49.064 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:49.064 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:49.064 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:49.064 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:49.064 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:49.064 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:49.064 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:49.064 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:49.064 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:49.064 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:49.064 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:49.064 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:49.064 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:49.064 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:49.064 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:49.064 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:49.064 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:49.325 14:09:16 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:49.325 14:09:16 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:49.325 14:09:16 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:49.325 14:09:16 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:49.325 14:09:16 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:49.325 14:09:16 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:49.325 14:09:16 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:49.325 14:09:16 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:49.325 14:09:16 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:49.325 14:09:16 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:49.325 14:09:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:49.325 14:09:16 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:49.325 14:09:16 nvmf_dif -- 
nvmf/common.sh@481 -- # nvmfpid=3132348 00:27:49.325 14:09:16 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3132348 00:27:49.325 14:09:16 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 3132348 ']' 00:27:49.325 14:09:16 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.325 14:09:16 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:49.325 14:09:16 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.325 14:09:16 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:49.325 14:09:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:49.325 [2024-07-26 14:09:16.622556] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:27:49.325 [2024-07-26 14:09:16.622597] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:49.325 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.325 [2024-07-26 14:09:16.679582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.325 [2024-07-26 14:09:16.759329] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:49.325 [2024-07-26 14:09:16.759363] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:49.325 [2024-07-26 14:09:16.759370] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:49.325 [2024-07-26 14:09:16.759379] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:49.326 [2024-07-26 14:09:16.759384] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
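With the target application running inside the cvl_0_0_ns_spdk namespace and waiting for RPCs, the dif.sh steps that follow configure it over JSON-RPC. The rpc_cmd helper seen in the trace forwards its arguments to scripts/rpc.py (default socket /var/tmp/spdk.sock assumed), so the same DIF-capable target could be built by hand with roughly:

    # TCP transport with DIF insert/strip enabled (target/dif.sh@50 below)
    scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    # null bdev with 16-byte metadata and DIF type 1 backing the test subsystem
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420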
00:27:49.326 [2024-07-26 14:09:16.759401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.306 14:09:17 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:50.306 14:09:17 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:27:50.306 14:09:17 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:50.306 14:09:17 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:50.306 14:09:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:50.306 14:09:17 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:50.306 14:09:17 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:50.306 14:09:17 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:50.306 14:09:17 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.306 14:09:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:50.306 [2024-07-26 14:09:17.462962] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:50.306 14:09:17 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.306 14:09:17 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:50.306 14:09:17 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:50.306 14:09:17 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:50.306 14:09:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:50.306 ************************************ 00:27:50.306 START TEST fio_dif_1_default 00:27:50.306 ************************************ 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:50.306 bdev_null0 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:50.306 [2024-07-26 14:09:17.535250] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:50.306 14:09:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.307 { 00:27:50.307 "params": { 00:27:50.307 "name": "Nvme$subsystem", 00:27:50.307 "trtype": "$TEST_TRANSPORT", 00:27:50.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.307 "adrfam": "ipv4", 00:27:50.307 "trsvcid": "$NVMF_PORT", 00:27:50.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.307 "hdgst": ${hdgst:-false}, 00:27:50.307 "ddgst": ${ddgst:-false} 00:27:50.307 }, 00:27:50.307 "method": "bdev_nvme_attach_controller" 00:27:50.307 } 00:27:50.307 EOF 00:27:50.307 )") 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:50.307 "params": { 00:27:50.307 "name": "Nvme0", 00:27:50.307 "trtype": "tcp", 00:27:50.307 "traddr": "10.0.0.2", 00:27:50.307 "adrfam": "ipv4", 00:27:50.307 "trsvcid": "4420", 00:27:50.307 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:50.307 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:50.307 "hdgst": false, 00:27:50.307 "ddgst": false 00:27:50.307 }, 00:27:50.307 "method": "bdev_nvme_attach_controller" 00:27:50.307 }' 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:50.307 14:09:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:50.566 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:50.566 fio-3.35 00:27:50.566 Starting 1 thread 00:27:50.566 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.805 00:28:02.805 filename0: (groupid=0, jobs=1): err= 0: pid=3132807: Fri Jul 26 14:09:28 2024 00:28:02.805 read: IOPS=94, BW=376KiB/s (385kB/s)(3776KiB/10042msec) 00:28:02.805 slat (nsec): min=5880, max=28654, avg=6239.84, stdev=1419.73 00:28:02.805 clat (usec): min=41801, max=45204, avg=42530.76, stdev=560.60 00:28:02.805 lat (usec): min=41807, max=45232, avg=42537.00, stdev=560.78 00:28:02.805 clat percentiles (usec): 00:28:02.805 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:28:02.805 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42730], 60.00th=[42730], 00:28:02.805 | 70.00th=[42730], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:28:02.805 | 99.00th=[43779], 99.50th=[43779], 99.90th=[45351], 99.95th=[45351], 00:28:02.805 | 99.99th=[45351] 00:28:02.805 bw ( KiB/s): min= 352, max= 384, per=99.99%, avg=376.00, stdev=14.22, samples=20 00:28:02.805 iops : min= 88, max= 96, avg=94.00, stdev= 3.55, samples=20 00:28:02.805 
lat (msec) : 50=100.00% 00:28:02.805 cpu : usr=94.55%, sys=5.20%, ctx=8, majf=0, minf=230 00:28:02.805 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:02.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.805 issued rwts: total=944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.805 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:02.805 00:28:02.805 Run status group 0 (all jobs): 00:28:02.805 READ: bw=376KiB/s (385kB/s), 376KiB/s-376KiB/s (385kB/s-385kB/s), io=3776KiB (3867kB), run=10042-10042msec 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.805 00:28:02.805 real 0m11.090s 00:28:02.805 user 0m16.655s 00:28:02.805 sys 0m0.800s 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:02.805 ************************************ 00:28:02.805 END TEST fio_dif_1_default 00:28:02.805 ************************************ 00:28:02.805 14:09:28 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:02.805 14:09:28 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:02.805 14:09:28 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:02.805 14:09:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:02.805 ************************************ 00:28:02.805 START TEST fio_dif_1_multi_subsystems 00:28:02.805 ************************************ 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:28:02.805 14:09:28 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:02.805 bdev_null0 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:02.805 [2024-07-26 14:09:28.697198] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:02.805 bdev_null1 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.805 { 00:28:02.805 "params": { 00:28:02.805 "name": "Nvme$subsystem", 00:28:02.805 "trtype": "$TEST_TRANSPORT", 00:28:02.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.805 "adrfam": "ipv4", 00:28:02.805 "trsvcid": "$NVMF_PORT", 00:28:02.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.805 "hdgst": ${hdgst:-false}, 00:28:02.805 "ddgst": ${ddgst:-false} 00:28:02.805 }, 00:28:02.805 "method": "bdev_nvme_attach_controller" 00:28:02.805 } 00:28:02.805 EOF 00:28:02.805 )") 00:28:02.805 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:28:02.806 14:09:28 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.806 { 00:28:02.806 "params": { 00:28:02.806 "name": "Nvme$subsystem", 00:28:02.806 "trtype": "$TEST_TRANSPORT", 00:28:02.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.806 "adrfam": "ipv4", 00:28:02.806 "trsvcid": "$NVMF_PORT", 00:28:02.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.806 "hdgst": ${hdgst:-false}, 00:28:02.806 "ddgst": ${ddgst:-false} 00:28:02.806 }, 00:28:02.806 "method": "bdev_nvme_attach_controller" 00:28:02.806 } 00:28:02.806 EOF 00:28:02.806 )") 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
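For readability, the xtrace above shows gen_nvmf_target_json building one JSON fragment per subsystem from a heredoc template and then comma-joining them (IFS=, plus printf) before pretty-printing with jq; the result is what fio reads from /dev/fd/62. The snippet below is a condensed, self-contained sketch of that pattern, not the literal nvmf/common.sh code: the gen_conf name and the surrounding "subsystems"/"bdev" envelope are assumptions added so the sketch emits valid JSON on its own, while the per-controller fields match the trace.

# Simplified sketch of the config generation traced above (envelope assumed,
# not copied from nvmf/common.sh). Each subsystem contributes one
# bdev_nvme_attach_controller entry for the spdk_bdev fio plugin.
gen_conf() {
    local subsystem frags=()
    for subsystem in "$@"; do
        frags+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the fragments and wrap them so jq sees one valid JSON document.
    printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' \
        "$(IFS=,; printf '%s' "${frags[*]}")" | jq .
}
# gen_conf 0 1 reproduces (modulo the assumed envelope) the two-controller
# config printed in the trace that follows.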
00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:02.806 "params": { 00:28:02.806 "name": "Nvme0", 00:28:02.806 "trtype": "tcp", 00:28:02.806 "traddr": "10.0.0.2", 00:28:02.806 "adrfam": "ipv4", 00:28:02.806 "trsvcid": "4420", 00:28:02.806 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:02.806 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:02.806 "hdgst": false, 00:28:02.806 "ddgst": false 00:28:02.806 }, 00:28:02.806 "method": "bdev_nvme_attach_controller" 00:28:02.806 },{ 00:28:02.806 "params": { 00:28:02.806 "name": "Nvme1", 00:28:02.806 "trtype": "tcp", 00:28:02.806 "traddr": "10.0.0.2", 00:28:02.806 "adrfam": "ipv4", 00:28:02.806 "trsvcid": "4420", 00:28:02.806 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:02.806 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:02.806 "hdgst": false, 00:28:02.806 "ddgst": false 00:28:02.806 }, 00:28:02.806 "method": "bdev_nvme_attach_controller" 00:28:02.806 }' 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:02.806 14:09:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.806 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:02.806 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:02.806 fio-3.35 00:28:02.806 Starting 2 threads 00:28:02.806 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.798 00:28:12.798 filename0: (groupid=0, jobs=1): err= 0: pid=3134719: Fri Jul 26 14:09:39 2024 00:28:12.798 read: IOPS=177, BW=709KiB/s (726kB/s)(7120KiB/10036msec) 00:28:12.798 slat (nsec): min=3237, max=24287, avg=6985.38, stdev=2000.17 00:28:12.798 clat (usec): min=1931, max=47006, avg=22530.92, stdev=20429.38 00:28:12.798 lat (usec): min=1937, max=47018, avg=22537.91, stdev=20428.78 00:28:12.798 clat percentiles (usec): 00:28:12.798 | 1.00th=[ 1942], 5.00th=[ 1958], 10.00th=[ 1975], 20.00th=[ 1991], 00:28:12.798 | 30.00th=[ 2057], 40.00th=[ 2147], 50.00th=[41681], 60.00th=[42730], 00:28:12.798 | 70.00th=[42730], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:28:12.798 | 99.00th=[43254], 99.50th=[43254], 99.90th=[46924], 99.95th=[46924], 00:28:12.798 | 99.99th=[46924] 
00:28:12.798 bw ( KiB/s): min= 704, max= 736, per=65.49%, avg=710.40, stdev=13.13, samples=20 00:28:12.798 iops : min= 176, max= 184, avg=177.60, stdev= 3.28, samples=20 00:28:12.798 lat (msec) : 2=21.85%, 4=28.03%, 50=50.11% 00:28:12.798 cpu : usr=97.85%, sys=1.88%, ctx=14, majf=0, minf=136 00:28:12.798 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:12.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.799 issued rwts: total=1780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.799 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:12.799 filename1: (groupid=0, jobs=1): err= 0: pid=3134720: Fri Jul 26 14:09:39 2024 00:28:12.799 read: IOPS=93, BW=376KiB/s (385kB/s)(3760KiB/10010msec) 00:28:12.799 slat (nsec): min=4282, max=28677, avg=7682.41, stdev=2762.19 00:28:12.799 clat (usec): min=41833, max=48207, avg=42569.89, stdev=625.68 00:28:12.799 lat (usec): min=41840, max=48220, avg=42577.57, stdev=625.68 00:28:12.799 clat percentiles (usec): 00:28:12.799 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:28:12.799 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42730], 60.00th=[42730], 00:28:12.799 | 70.00th=[42730], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:28:12.799 | 99.00th=[43779], 99.50th=[44303], 99.90th=[47973], 99.95th=[47973], 00:28:12.799 | 99.99th=[47973] 00:28:12.799 bw ( KiB/s): min= 352, max= 384, per=34.50%, avg=374.40, stdev=15.05, samples=20 00:28:12.799 iops : min= 88, max= 96, avg=93.60, stdev= 3.76, samples=20 00:28:12.799 lat (msec) : 50=100.00% 00:28:12.799 cpu : usr=97.65%, sys=2.08%, ctx=14, majf=0, minf=128 00:28:12.799 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:12.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.799 issued rwts: total=940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.799 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:12.799 00:28:12.799 Run status group 0 (all jobs): 00:28:12.799 READ: bw=1084KiB/s (1110kB/s), 376KiB/s-709KiB/s (385kB/s-726kB/s), io=10.6MiB (11.1MB), run=10010-10036msec 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.799 14:09:39 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.799 00:28:12.799 real 0m11.296s 00:28:12.799 user 0m26.172s 00:28:12.799 sys 0m0.666s 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:12.799 14:09:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:12.799 ************************************ 00:28:12.799 END TEST fio_dif_1_multi_subsystems 00:28:12.799 ************************************ 00:28:12.799 14:09:39 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:12.799 14:09:39 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:12.799 14:09:39 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:12.799 14:09:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:12.799 ************************************ 00:28:12.799 START TEST fio_dif_rand_params 00:28:12.799 ************************************ 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.799 bdev_null0 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.799 [2024-07-26 14:09:40.064058] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.799 { 00:28:12.799 "params": { 00:28:12.799 "name": "Nvme$subsystem", 00:28:12.799 "trtype": "$TEST_TRANSPORT", 00:28:12.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.799 "adrfam": "ipv4", 00:28:12.799 "trsvcid": "$NVMF_PORT", 00:28:12.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.799 "hdgst": 
${hdgst:-false}, 00:28:12.799 "ddgst": ${ddgst:-false} 00:28:12.799 }, 00:28:12.799 "method": "bdev_nvme_attach_controller" 00:28:12.799 } 00:28:12.799 EOF 00:28:12.799 )") 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:12.799 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
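The fio_plugin/fio_bdev trace above reduces to a short shell sequence: probe the spdk_bdev fio plugin for a linked sanitizer runtime, preload whatever is found together with the plugin itself, and hand fio the generated JSON over a file descriptor. Below is a condensed sketch of that sequence; run_fio is an illustrative stand-in name, while the ldd/grep/awk probe, the plugin path, and the fio arguments are taken from the trace.

# Condensed sketch of the wrapper traced above (name and structure simplified).
run_fio() {
    local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    local sanitizer asan_lib preload=

    # If the plugin links libasan/libclang_rt.asan, that runtime must be
    # preloaded ahead of the plugin or fio fails when it dlopens the engine.
    for sanitizer in libasan libclang_rt.asan; do
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && preload+="$asan_lib "
    done

    # /dev/fd/62 carries the JSON bdev config and /dev/fd/61 the fio job file,
    # as in the trace (both are process substitutions set up by the caller).
    LD_PRELOAD="$preload$plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
}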
00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:12.800 "params": { 00:28:12.800 "name": "Nvme0", 00:28:12.800 "trtype": "tcp", 00:28:12.800 "traddr": "10.0.0.2", 00:28:12.800 "adrfam": "ipv4", 00:28:12.800 "trsvcid": "4420", 00:28:12.800 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:12.800 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:12.800 "hdgst": false, 00:28:12.800 "ddgst": false 00:28:12.800 }, 00:28:12.800 "method": "bdev_nvme_attach_controller" 00:28:12.800 }' 00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:12.800 14:09:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:13.059 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:13.059 ... 
00:28:13.059 fio-3.35 00:28:13.059 Starting 3 threads 00:28:13.059 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.651 00:28:19.651 filename0: (groupid=0, jobs=1): err= 0: pid=3136659: Fri Jul 26 14:09:45 2024 00:28:19.651 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(124MiB/5012msec) 00:28:19.651 slat (nsec): min=6189, max=27750, avg=9062.31, stdev=3010.30 00:28:19.651 clat (usec): min=5627, max=60441, avg=15175.59, stdev=14481.07 00:28:19.651 lat (usec): min=5634, max=60447, avg=15184.66, stdev=14481.24 00:28:19.651 clat percentiles (usec): 00:28:19.651 | 1.00th=[ 6194], 5.00th=[ 6980], 10.00th=[ 7439], 20.00th=[ 8160], 00:28:19.651 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10290], 00:28:19.651 | 70.00th=[11207], 80.00th=[13173], 90.00th=[50594], 95.00th=[52167], 00:28:19.651 | 99.00th=[56886], 99.50th=[60031], 99.90th=[60556], 99.95th=[60556], 00:28:19.651 | 99.99th=[60556] 00:28:19.651 bw ( KiB/s): min=15360, max=36096, per=30.98%, avg=25267.20, stdev=5853.95, samples=10 00:28:19.651 iops : min= 120, max= 282, avg=197.40, stdev=45.73, samples=10 00:28:19.651 lat (msec) : 10=55.51%, 20=31.45%, 50=1.92%, 100=11.12% 00:28:19.651 cpu : usr=95.09%, sys=4.23%, ctx=7, majf=0, minf=72 00:28:19.651 IO depths : 1=5.3%, 2=94.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:19.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.651 issued rwts: total=989,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:19.651 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:19.651 filename0: (groupid=0, jobs=1): err= 0: pid=3136660: Fri Jul 26 14:09:45 2024 00:28:19.651 read: IOPS=152, BW=19.0MiB/s (20.0MB/s)(96.1MiB/5046msec) 00:28:19.651 slat (nsec): min=6217, max=24946, avg=9840.62, stdev=2712.90 00:28:19.651 clat (usec): min=6073, max=95706, avg=19613.66, stdev=18651.15 00:28:19.651 lat (usec): min=6081, max=95718, avg=19623.50, stdev=18651.34 00:28:19.651 clat percentiles (usec): 00:28:19.651 | 1.00th=[ 6194], 5.00th=[ 7308], 10.00th=[ 7832], 20.00th=[ 8455], 00:28:19.651 | 30.00th=[ 9110], 40.00th=[10159], 50.00th=[10945], 60.00th=[11863], 00:28:19.651 | 70.00th=[13829], 80.00th=[49021], 90.00th=[52691], 95.00th=[56361], 00:28:19.651 | 99.00th=[67634], 99.50th=[94897], 99.90th=[95945], 99.95th=[95945], 00:28:19.651 | 99.99th=[95945] 00:28:19.651 bw ( KiB/s): min=16160, max=24576, per=24.05%, avg=19612.80, stdev=2887.51, samples=10 00:28:19.651 iops : min= 126, max= 192, avg=153.20, stdev=22.59, samples=10 00:28:19.651 lat (msec) : 10=38.75%, 20=40.57%, 50=2.21%, 100=18.47% 00:28:19.651 cpu : usr=96.02%, sys=3.51%, ctx=8, majf=0, minf=70 00:28:19.651 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:19.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.651 issued rwts: total=769,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:19.651 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:19.651 filename0: (groupid=0, jobs=1): err= 0: pid=3136661: Fri Jul 26 14:09:45 2024 00:28:19.651 read: IOPS=288, BW=36.1MiB/s (37.9MB/s)(182MiB/5045msec) 00:28:19.651 slat (nsec): min=6230, max=26586, avg=9096.23, stdev=2660.14 00:28:19.651 clat (usec): min=5565, max=95987, avg=10345.06, stdev=9204.24 00:28:19.651 lat (usec): min=5572, max=95995, avg=10354.16, stdev=9204.42 00:28:19.651 clat percentiles (usec): 
00:28:19.651 | 1.00th=[ 5997], 5.00th=[ 6390], 10.00th=[ 6718], 20.00th=[ 7177], 00:28:19.651 | 30.00th=[ 7439], 40.00th=[ 7767], 50.00th=[ 8094], 60.00th=[ 8586], 00:28:19.651 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[11338], 95.00th=[16057], 00:28:19.651 | 99.00th=[52691], 99.50th=[55837], 99.90th=[57934], 99.95th=[95945], 00:28:19.651 | 99.99th=[95945] 00:28:19.651 bw ( KiB/s): min=28416, max=47360, per=45.67%, avg=37248.00, stdev=6198.57, samples=10 00:28:19.651 iops : min= 222, max= 370, avg=291.00, stdev=48.43, samples=10 00:28:19.651 lat (msec) : 10=81.88%, 20=13.73%, 50=1.30%, 100=3.09% 00:28:19.651 cpu : usr=94.61%, sys=4.58%, ctx=14, majf=0, minf=132 00:28:19.651 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:19.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.651 issued rwts: total=1457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:19.651 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:19.651 00:28:19.651 Run status group 0 (all jobs): 00:28:19.651 READ: bw=79.6MiB/s (83.5MB/s), 19.0MiB/s-36.1MiB/s (20.0MB/s-37.9MB/s), io=402MiB (421MB), run=5012-5046msec 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
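For the NULL_DIF=2 pass that follows, create_subsystems 0 1 2 runs the create_subsystem helper three times, and per the trace each run issues four RPCs. The sketch below spells those out as plain scripts/rpc.py calls, assuming rpc_cmd is the usual autotest wrapper around rpc.py; the loop and variable names are illustrative, while the RPC names and arguments are copied from the trace.

# Illustrative expansion of create_subsystems 0 1 2 for the DIF type 2 pass
# (assumes rpc_cmd forwards to scripts/rpc.py).
for sub_id in 0 1 2; do
    # Null bdev: 64 MB, 512-byte blocks, 16-byte metadata, DIF type 2.
    scripts/rpc.py bdev_null_create "bdev_null$sub_id" 64 512 \
        --md-size 16 --dif-type 2
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub_id" \
        --serial-number "53313233-$sub_id" --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub_id" \
        "bdev_null$sub_id"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub_id" \
        -t tcp -a 10.0.0.2 -s 4420
done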
00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:19.651 bdev_null0 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:19.651 [2024-07-26 14:09:46.121741] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.651 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:19.651 bdev_null1 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:19.652 bdev_null2 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.652 { 00:28:19.652 "params": { 00:28:19.652 "name": "Nvme$subsystem", 00:28:19.652 "trtype": "$TEST_TRANSPORT", 00:28:19.652 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.652 "adrfam": "ipv4", 00:28:19.652 "trsvcid": "$NVMF_PORT", 00:28:19.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.652 "hdgst": ${hdgst:-false}, 00:28:19.652 "ddgst": ${ddgst:-false} 00:28:19.652 }, 00:28:19.652 "method": "bdev_nvme_attach_controller" 00:28:19.652 } 00:28:19.652 EOF 00:28:19.652 )") 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.652 { 00:28:19.652 "params": { 00:28:19.652 "name": "Nvme$subsystem", 00:28:19.652 "trtype": "$TEST_TRANSPORT", 00:28:19.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.652 "adrfam": "ipv4", 00:28:19.652 "trsvcid": "$NVMF_PORT", 00:28:19.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.652 "hdgst": ${hdgst:-false}, 00:28:19.652 "ddgst": ${ddgst:-false} 00:28:19.652 }, 00:28:19.652 "method": "bdev_nvme_attach_controller" 00:28:19.652 } 00:28:19.652 EOF 00:28:19.652 )") 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file++ )) 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.652 { 00:28:19.652 "params": { 00:28:19.652 "name": "Nvme$subsystem", 00:28:19.652 "trtype": "$TEST_TRANSPORT", 00:28:19.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.652 "adrfam": "ipv4", 00:28:19.652 "trsvcid": "$NVMF_PORT", 00:28:19.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.652 "hdgst": ${hdgst:-false}, 00:28:19.652 "ddgst": ${ddgst:-false} 00:28:19.652 }, 00:28:19.652 "method": "bdev_nvme_attach_controller" 00:28:19.652 } 00:28:19.652 EOF 00:28:19.652 )") 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:19.652 "params": { 00:28:19.652 "name": "Nvme0", 00:28:19.652 "trtype": "tcp", 00:28:19.652 "traddr": "10.0.0.2", 00:28:19.652 "adrfam": "ipv4", 00:28:19.652 "trsvcid": "4420", 00:28:19.652 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:19.652 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:19.652 "hdgst": false, 00:28:19.652 "ddgst": false 00:28:19.652 }, 00:28:19.652 "method": "bdev_nvme_attach_controller" 00:28:19.652 },{ 00:28:19.652 "params": { 00:28:19.652 "name": "Nvme1", 00:28:19.652 "trtype": "tcp", 00:28:19.652 "traddr": "10.0.0.2", 00:28:19.652 "adrfam": "ipv4", 00:28:19.652 "trsvcid": "4420", 00:28:19.652 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:19.652 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:19.652 "hdgst": false, 00:28:19.652 "ddgst": false 00:28:19.652 }, 00:28:19.652 "method": "bdev_nvme_attach_controller" 00:28:19.652 },{ 00:28:19.652 "params": { 00:28:19.652 "name": "Nvme2", 00:28:19.652 "trtype": "tcp", 00:28:19.652 "traddr": "10.0.0.2", 00:28:19.652 "adrfam": "ipv4", 00:28:19.652 "trsvcid": "4420", 00:28:19.652 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:19.652 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:19.652 "hdgst": false, 00:28:19.652 "ddgst": false 00:28:19.652 }, 00:28:19.652 "method": "bdev_nvme_attach_controller" 00:28:19.652 }' 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:19.652 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:19.652 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:19.652 ... 00:28:19.652 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:19.652 ... 00:28:19.652 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:19.652 ... 00:28:19.652 fio-3.35 00:28:19.652 Starting 24 threads 00:28:19.652 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.871 00:28:31.871 filename0: (groupid=0, jobs=1): err= 0: pid=3137920: Fri Jul 26 14:09:57 2024 00:28:31.871 read: IOPS=588, BW=2355KiB/s (2412kB/s)(23.4MiB/10153msec) 00:28:31.871 slat (usec): min=6, max=932, avg=30.35, stdev=21.02 00:28:31.871 clat (msec): min=7, max=175, avg=26.99, stdev= 9.34 00:28:31.871 lat (msec): min=7, max=175, avg=27.02, stdev= 9.34 00:28:31.871 clat percentiles (msec): 00:28:31.871 | 1.00th=[ 13], 5.00th=[ 21], 10.00th=[ 23], 20.00th=[ 24], 00:28:31.871 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 27], 00:28:31.871 | 70.00th=[ 29], 80.00th=[ 32], 90.00th=[ 34], 95.00th=[ 37], 00:28:31.871 | 99.00th=[ 44], 99.50th=[ 47], 99.90th=[ 176], 99.95th=[ 176], 00:28:31.871 | 99.99th=[ 176] 00:28:31.871 bw ( KiB/s): min= 1920, max= 2624, per=4.11%, avg=2384.50, stdev=179.93, samples=20 00:28:31.871 iops : min= 480, max= 656, avg=596.10, stdev=44.96, samples=20 00:28:31.871 lat (msec) : 10=0.23%, 20=4.47%, 50=95.03%, 250=0.27% 00:28:31.871 cpu : usr=94.31%, sys=2.50%, ctx=147, majf=0, minf=49 00:28:31.871 IO depths : 1=0.9%, 2=1.8%, 4=9.9%, 8=74.3%, 16=13.1%, 32=0.0%, >=64=0.0% 00:28:31.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.871 complete : 0=0.0%, 4=90.8%, 8=5.0%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.871 issued rwts: total=5978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.871 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.871 filename0: (groupid=0, jobs=1): err= 0: pid=3137921: Fri Jul 26 14:09:57 2024 00:28:31.871 read: IOPS=615, BW=2461KiB/s (2520kB/s)(24.5MiB/10176msec) 00:28:31.871 slat (nsec): min=6337, max=93947, avg=31400.05, stdev=17064.20 00:28:31.871 clat (msec): min=3, max=192, avg=25.66, stdev= 8.04 00:28:31.871 lat (msec): min=3, max=192, avg=25.69, stdev= 8.04 00:28:31.871 clat percentiles (msec): 00:28:31.871 | 1.00th=[ 13], 5.00th=[ 21], 10.00th=[ 23], 20.00th=[ 23], 00:28:31.871 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 26], 00:28:31.871 | 70.00th=[ 26], 80.00th=[ 29], 90.00th=[ 32], 95.00th=[ 35], 00:28:31.871 | 99.00th=[ 40], 99.50th=[ 42], 99.90th=[ 192], 99.95th=[ 192], 00:28:31.871 | 99.99th=[ 192] 00:28:31.871 bw ( KiB/s): min= 2176, max= 2928, per=4.31%, avg=2500.65, stdev=176.82, samples=20 00:28:31.871 iops : min= 544, max= 732, avg=625.15, stdev=44.20, samples=20 00:28:31.871 lat (msec) : 4=0.06%, 10=0.70%, 20=3.56%, 50=95.51%, 250=0.16% 00:28:31.871 cpu : usr=98.21%, sys=1.22%, ctx=57, majf=0, minf=35 
00:28:31.871 IO depths : 1=1.4%, 2=2.9%, 4=10.3%, 8=73.2%, 16=12.2%, 32=0.0%, >=64=0.0% 00:28:31.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.871 complete : 0=0.0%, 4=90.6%, 8=4.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.871 issued rwts: total=6260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.871 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.871 filename0: (groupid=0, jobs=1): err= 0: pid=3137922: Fri Jul 26 14:09:57 2024 00:28:31.871 read: IOPS=642, BW=2569KiB/s (2631kB/s)(25.5MiB/10148msec) 00:28:31.871 slat (nsec): min=6283, max=98250, avg=34521.61, stdev=17729.08 00:28:31.871 clat (msec): min=6, max=175, avg=24.66, stdev= 8.41 00:28:31.871 lat (msec): min=6, max=175, avg=24.69, stdev= 8.41 00:28:31.871 clat percentiles (msec): 00:28:31.871 | 1.00th=[ 14], 5.00th=[ 18], 10.00th=[ 22], 20.00th=[ 23], 00:28:31.871 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 25], 00:28:31.871 | 70.00th=[ 25], 80.00th=[ 26], 90.00th=[ 29], 95.00th=[ 32], 00:28:31.871 | 99.00th=[ 40], 99.50th=[ 45], 99.90th=[ 176], 99.95th=[ 176], 00:28:31.871 | 99.99th=[ 176] 00:28:31.871 bw ( KiB/s): min= 2288, max= 2816, per=4.48%, avg=2600.50, stdev=145.49, samples=20 00:28:31.871 iops : min= 572, max= 704, avg=650.10, stdev=36.36, samples=20 00:28:31.871 lat (msec) : 10=0.35%, 20=6.51%, 50=92.90%, 250=0.25% 00:28:31.871 cpu : usr=98.13%, sys=1.07%, ctx=39, majf=0, minf=52 00:28:31.871 IO depths : 1=2.5%, 2=5.8%, 4=18.1%, 8=62.4%, 16=11.1%, 32=0.0%, >=64=0.0% 00:28:31.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.871 complete : 0=0.0%, 4=93.6%, 8=1.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.871 issued rwts: total=6518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.871 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.871 filename0: (groupid=0, jobs=1): err= 0: pid=3137923: Fri Jul 26 14:09:57 2024 00:28:31.871 read: IOPS=675, BW=2701KiB/s (2766kB/s)(26.4MiB/10013msec) 00:28:31.871 slat (nsec): min=4197, max=98311, avg=27911.34, stdev=17802.30 00:28:31.871 clat (usec): min=2325, max=50366, avg=23532.43, stdev=4956.57 00:28:31.871 lat (usec): min=2331, max=50397, avg=23560.34, stdev=4960.61 00:28:31.871 clat percentiles (usec): 00:28:31.871 | 1.00th=[ 5145], 5.00th=[15533], 10.00th=[17433], 20.00th=[21890], 00:28:31.871 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23725], 60.00th=[24249], 00:28:31.871 | 70.00th=[24511], 80.00th=[25560], 90.00th=[28967], 95.00th=[31327], 00:28:31.871 | 99.00th=[38536], 99.50th=[41157], 99.90th=[45876], 99.95th=[45876], 00:28:31.871 | 99.99th=[50594] 00:28:31.871 bw ( KiB/s): min= 2352, max= 3632, per=4.65%, avg=2699.15, stdev=327.03, samples=20 00:28:31.871 iops : min= 588, max= 908, avg=674.70, stdev=81.74, samples=20 00:28:31.871 lat (msec) : 4=0.81%, 10=0.95%, 20=14.15%, 50=84.06%, 100=0.03% 00:28:31.871 cpu : usr=98.58%, sys=1.00%, ctx=15, majf=0, minf=59 00:28:31.871 IO depths : 1=1.1%, 2=2.2%, 4=9.0%, 8=75.1%, 16=12.5%, 32=0.0%, >=64=0.0% 00:28:31.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.871 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.871 issued rwts: total=6762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.871 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.871 filename0: (groupid=0, jobs=1): err= 0: pid=3137924: Fri Jul 26 14:09:57 2024 00:28:31.871 read: IOPS=647, BW=2592KiB/s (2654kB/s)(25.8MiB/10174msec) 00:28:31.871 slat (nsec): min=6210, 
max=98331, avg=21539.35, stdev=15635.27 00:28:31.871 clat (msec): min=8, max=174, avg=24.52, stdev= 8.81 00:28:31.872 lat (msec): min=8, max=174, avg=24.54, stdev= 8.81 00:28:31.872 clat percentiles (msec): 00:28:31.872 | 1.00th=[ 12], 5.00th=[ 15], 10.00th=[ 18], 20.00th=[ 22], 00:28:31.872 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 25], 00:28:31.872 | 70.00th=[ 26], 80.00th=[ 27], 90.00th=[ 31], 95.00th=[ 34], 00:28:31.872 | 99.00th=[ 41], 99.50th=[ 44], 99.90th=[ 176], 99.95th=[ 176], 00:28:31.872 | 99.99th=[ 176] 00:28:31.872 bw ( KiB/s): min= 2296, max= 2920, per=4.53%, avg=2629.80, stdev=184.55, samples=20 00:28:31.872 iops : min= 574, max= 730, avg=657.40, stdev=46.14, samples=20 00:28:31.872 lat (msec) : 10=0.41%, 20=13.44%, 50=85.66%, 100=0.24%, 250=0.24% 00:28:31.872 cpu : usr=96.47%, sys=1.58%, ctx=68, majf=0, minf=36 00:28:31.872 IO depths : 1=2.2%, 2=4.5%, 4=14.1%, 8=67.8%, 16=11.4%, 32=0.0%, >=64=0.0% 00:28:31.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.872 complete : 0=0.0%, 4=91.8%, 8=3.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.872 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.872 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.872 filename0: (groupid=0, jobs=1): err= 0: pid=3137925: Fri Jul 26 14:09:57 2024 00:28:31.872 read: IOPS=593, BW=2376KiB/s (2433kB/s)(23.5MiB/10132msec) 00:28:31.872 slat (nsec): min=6284, max=67650, avg=26362.93, stdev=14595.77 00:28:31.872 clat (msec): min=10, max=179, avg=26.80, stdev= 9.08 00:28:31.872 lat (msec): min=10, max=179, avg=26.83, stdev= 9.08 00:28:31.872 clat percentiles (msec): 00:28:31.872 | 1.00th=[ 16], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 24], 00:28:31.872 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 26], 00:28:31.872 | 70.00th=[ 28], 80.00th=[ 31], 90.00th=[ 34], 95.00th=[ 36], 00:28:31.872 | 99.00th=[ 42], 99.50th=[ 46], 99.90th=[ 178], 99.95th=[ 178], 00:28:31.872 | 99.99th=[ 180] 00:28:31.872 bw ( KiB/s): min= 2128, max= 2696, per=4.14%, avg=2400.50, stdev=122.99, samples=20 00:28:31.872 iops : min= 532, max= 674, avg=600.10, stdev=30.71, samples=20 00:28:31.872 lat (msec) : 20=3.59%, 50=96.14%, 250=0.27% 00:28:31.872 cpu : usr=98.73%, sys=0.88%, ctx=51, majf=0, minf=39 00:28:31.872 IO depths : 1=0.1%, 2=0.4%, 4=9.0%, 8=75.2%, 16=15.2%, 32=0.0%, >=64=0.0% 00:28:31.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.872 complete : 0=0.0%, 4=91.3%, 8=5.6%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.872 issued rwts: total=6018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.872 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.872 filename0: (groupid=0, jobs=1): err= 0: pid=3137926: Fri Jul 26 14:09:57 2024 00:28:31.872 read: IOPS=598, BW=2393KiB/s (2451kB/s)(23.7MiB/10145msec) 00:28:31.872 slat (nsec): min=6355, max=96221, avg=33947.90, stdev=18678.12 00:28:31.872 clat (msec): min=11, max=159, avg=26.52, stdev= 8.07 00:28:31.872 lat (msec): min=11, max=159, avg=26.56, stdev= 8.07 00:28:31.872 clat percentiles (msec): 00:28:31.872 | 1.00th=[ 16], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 23], 00:28:31.872 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 26], 00:28:31.872 | 70.00th=[ 28], 80.00th=[ 31], 90.00th=[ 33], 95.00th=[ 37], 00:28:31.872 | 99.00th=[ 42], 99.50th=[ 45], 99.90th=[ 159], 99.95th=[ 161], 00:28:31.872 | 99.99th=[ 161] 00:28:31.872 bw ( KiB/s): min= 2200, max= 2608, per=4.17%, avg=2421.30, stdev=120.14, samples=20 00:28:31.872 iops : 
min= 550, max= 652, avg=605.30, stdev=30.01, samples=20 00:28:31.872 lat (msec) : 20=3.23%, 50=96.39%, 100=0.12%, 250=0.26% 00:28:31.872 cpu : usr=98.52%, sys=0.85%, ctx=14, majf=0, minf=34 00:28:31.872 IO depths : 1=0.2%, 2=0.7%, 4=8.3%, 8=76.9%, 16=14.0%, 32=0.0%, >=64=0.0% 00:28:31.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.872 complete : 0=0.0%, 4=90.5%, 8=5.2%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.872 issued rwts: total=6070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.872 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.872 filename0: (groupid=0, jobs=1): err= 0: pid=3137927: Fri Jul 26 14:09:57 2024 00:28:31.872 read: IOPS=590, BW=2362KiB/s (2419kB/s)(23.4MiB/10137msec) 00:28:31.872 slat (nsec): min=6128, max=91857, avg=29566.04, stdev=18315.76 00:28:31.872 clat (msec): min=10, max=187, avg=26.94, stdev= 8.64 00:28:31.872 lat (msec): min=10, max=187, avg=26.97, stdev= 8.64 00:28:31.872 clat percentiles (msec): 00:28:31.872 | 1.00th=[ 16], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 24], 00:28:31.872 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 27], 00:28:31.872 | 70.00th=[ 28], 80.00th=[ 31], 90.00th=[ 34], 95.00th=[ 36], 00:28:31.872 | 99.00th=[ 46], 99.50th=[ 52], 99.90th=[ 161], 99.95th=[ 161], 00:28:31.872 | 99.99th=[ 188] 00:28:31.872 bw ( KiB/s): min= 2160, max= 2560, per=4.12%, avg=2388.10, stdev=118.10, samples=20 00:28:31.872 iops : min= 540, max= 640, avg=597.00, stdev=29.50, samples=20 00:28:31.872 lat (msec) : 20=3.79%, 50=95.67%, 100=0.27%, 250=0.27% 00:28:31.872 cpu : usr=97.12%, sys=1.37%, ctx=52, majf=0, minf=24 00:28:31.872 IO depths : 1=0.2%, 2=0.6%, 4=6.7%, 8=78.1%, 16=14.4%, 32=0.0%, >=64=0.0% 00:28:31.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.872 complete : 0=0.0%, 4=90.1%, 8=6.2%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.872 issued rwts: total=5987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.872 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.872 filename1: (groupid=0, jobs=1): err= 0: pid=3137928: Fri Jul 26 14:09:57 2024 00:28:31.872 read: IOPS=590, BW=2363KiB/s (2420kB/s)(23.4MiB/10148msec) 00:28:31.872 slat (nsec): min=6451, max=91437, avg=27191.53, stdev=14978.06 00:28:31.872 clat (msec): min=9, max=184, avg=26.93, stdev= 9.53 00:28:31.872 lat (msec): min=9, max=184, avg=26.96, stdev= 9.53 00:28:31.872 clat percentiles (msec): 00:28:31.872 | 1.00th=[ 15], 5.00th=[ 21], 10.00th=[ 23], 20.00th=[ 24], 00:28:31.872 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 27], 00:28:31.872 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 33], 95.00th=[ 36], 00:28:31.872 | 99.00th=[ 43], 99.50th=[ 48], 99.90th=[ 184], 99.95th=[ 184], 00:28:31.872 | 99.99th=[ 184] 00:28:31.872 bw ( KiB/s): min= 2080, max= 2728, per=4.12%, avg=2391.35, stdev=142.56, samples=20 00:28:31.872 iops : min= 520, max= 682, avg=597.80, stdev=35.62, samples=20 00:28:31.872 lat (msec) : 10=0.07%, 20=4.70%, 50=94.83%, 100=0.13%, 250=0.27% 00:28:31.872 cpu : usr=98.48%, sys=1.04%, ctx=95, majf=0, minf=32 00:28:31.872 IO depths : 1=0.8%, 2=1.6%, 4=10.3%, 8=73.9%, 16=13.5%, 32=0.0%, >=64=0.0% 00:28:31.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.872 complete : 0=0.0%, 4=91.2%, 8=4.7%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.872 issued rwts: total=5995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.872 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.872 filename1: (groupid=0, jobs=1): err= 0: 
pid=3137929: Fri Jul 26 14:09:57 2024 00:28:31.872 read: IOPS=581, BW=2327KiB/s (2383kB/s)(23.0MiB/10128msec) 00:28:31.872 slat (nsec): min=6259, max=93842, avg=31680.99, stdev=18582.44 00:28:31.872 clat (msec): min=7, max=174, avg=27.30, stdev= 9.07 00:28:31.872 lat (msec): min=7, max=174, avg=27.34, stdev= 9.07 00:28:31.872 clat percentiles (msec): 00:28:31.872 | 1.00th=[ 14], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 24], 00:28:31.872 | 30.00th=[ 25], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 28], 00:28:31.872 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 34], 95.00th=[ 36], 00:28:31.872 | 99.00th=[ 43], 99.50th=[ 53], 99.90th=[ 176], 99.95th=[ 176], 00:28:31.872 | 99.99th=[ 176] 00:28:31.872 bw ( KiB/s): min= 2016, max= 2656, per=4.05%, avg=2350.50, stdev=171.04, samples=20 00:28:31.872 iops : min= 504, max= 664, avg=587.55, stdev=42.78, samples=20 00:28:31.872 lat (msec) : 10=0.10%, 20=3.17%, 50=96.18%, 100=0.27%, 250=0.27% 00:28:31.872 cpu : usr=97.17%, sys=1.37%, ctx=50, majf=0, minf=30 00:28:31.872 IO depths : 1=0.1%, 2=0.4%, 4=11.0%, 8=74.2%, 16=14.2%, 32=0.0%, >=64=0.0% 00:28:31.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.872 complete : 0=0.0%, 4=91.8%, 8=4.3%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.872 issued rwts: total=5893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.872 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.872 filename1: (groupid=0, jobs=1): err= 0: pid=3137931: Fri Jul 26 14:09:57 2024 00:28:31.872 read: IOPS=558, BW=2234KiB/s (2288kB/s)(22.1MiB/10130msec) 00:28:31.872 slat (usec): min=6, max=1252, avg=29.69, stdev=25.01 00:28:31.872 clat (msec): min=11, max=169, avg=28.38, stdev= 7.91 00:28:31.872 lat (msec): min=11, max=169, avg=28.41, stdev= 7.90 00:28:31.872 clat percentiles (msec): 00:28:31.872 | 1.00th=[ 16], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 00:28:31.872 | 30.00th=[ 25], 40.00th=[ 26], 50.00th=[ 28], 60.00th=[ 29], 00:28:31.872 | 70.00th=[ 31], 80.00th=[ 33], 90.00th=[ 35], 95.00th=[ 38], 00:28:31.872 | 99.00th=[ 46], 99.50th=[ 48], 99.90th=[ 169], 99.95th=[ 169], 00:28:31.872 | 99.99th=[ 169] 00:28:31.872 bw ( KiB/s): min= 1952, max= 2504, per=3.89%, avg=2256.55, stdev=158.93, samples=20 00:28:31.872 iops : min= 488, max= 626, avg=564.10, stdev=39.75, samples=20 00:28:31.872 lat (msec) : 20=2.39%, 50=97.33%, 250=0.28% 00:28:31.872 cpu : usr=97.99%, sys=1.02%, ctx=41, majf=0, minf=33 00:28:31.872 IO depths : 1=0.2%, 2=0.5%, 4=8.9%, 8=76.2%, 16=14.3%, 32=0.0%, >=64=0.0% 00:28:31.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.872 complete : 0=0.0%, 4=90.9%, 8=5.2%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.872 issued rwts: total=5658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.872 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.872 filename1: (groupid=0, jobs=1): err= 0: pid=3137932: Fri Jul 26 14:09:57 2024 00:28:31.872 read: IOPS=593, BW=2372KiB/s (2429kB/s)(23.5MiB/10141msec) 00:28:31.872 slat (usec): min=6, max=1345, avg=26.30, stdev=28.80 00:28:31.872 clat (msec): min=10, max=174, avg=26.84, stdev= 8.59 00:28:31.872 lat (msec): min=10, max=174, avg=26.86, stdev= 8.59 00:28:31.872 clat percentiles (msec): 00:28:31.872 | 1.00th=[ 17], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 24], 00:28:31.872 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 26], 00:28:31.872 | 70.00th=[ 28], 80.00th=[ 31], 90.00th=[ 33], 95.00th=[ 36], 00:28:31.872 | 99.00th=[ 42], 99.50th=[ 45], 99.90th=[ 176], 99.95th=[ 176], 00:28:31.872 | 99.99th=[ 
176] 00:28:31.872 bw ( KiB/s): min= 2195, max= 2560, per=4.13%, avg=2398.95, stdev=110.96, samples=20 00:28:31.872 iops : min= 548, max= 640, avg=599.70, stdev=27.81, samples=20 00:28:31.872 lat (msec) : 20=1.80%, 50=97.94%, 250=0.27% 00:28:31.873 cpu : usr=86.95%, sys=5.30%, ctx=242, majf=0, minf=31 00:28:31.873 IO depths : 1=0.8%, 2=1.8%, 4=8.7%, 8=74.9%, 16=13.8%, 32=0.0%, >=64=0.0% 00:28:31.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.873 complete : 0=0.0%, 4=90.7%, 8=5.5%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.873 issued rwts: total=6014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.873 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.873 filename1: (groupid=0, jobs=1): err= 0: pid=3137933: Fri Jul 26 14:09:57 2024 00:28:31.873 read: IOPS=589, BW=2359KiB/s (2416kB/s)(23.4MiB/10169msec) 00:28:31.873 slat (nsec): min=6440, max=79191, avg=25816.45, stdev=15044.21 00:28:31.873 clat (msec): min=9, max=207, avg=26.90, stdev= 9.60 00:28:31.873 lat (msec): min=9, max=207, avg=26.92, stdev= 9.60 00:28:31.873 clat percentiles (msec): 00:28:31.873 | 1.00th=[ 16], 5.00th=[ 20], 10.00th=[ 23], 20.00th=[ 24], 00:28:31.873 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 27], 00:28:31.873 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 34], 95.00th=[ 36], 00:28:31.873 | 99.00th=[ 46], 99.50th=[ 51], 99.90th=[ 176], 99.95th=[ 207], 00:28:31.873 | 99.99th=[ 207] 00:28:31.873 bw ( KiB/s): min= 2208, max= 2576, per=4.12%, avg=2392.75, stdev=114.84, samples=20 00:28:31.873 iops : min= 552, max= 644, avg=598.15, stdev=28.74, samples=20 00:28:31.873 lat (msec) : 10=0.10%, 20=5.22%, 50=94.18%, 100=0.23%, 250=0.27% 00:28:31.873 cpu : usr=98.61%, sys=0.99%, ctx=52, majf=0, minf=32 00:28:31.873 IO depths : 1=0.7%, 2=1.7%, 4=10.9%, 8=73.2%, 16=13.6%, 32=0.0%, >=64=0.0% 00:28:31.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.873 complete : 0=0.0%, 4=91.4%, 8=4.6%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.873 issued rwts: total=5998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.873 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.873 filename1: (groupid=0, jobs=1): err= 0: pid=3137934: Fri Jul 26 14:09:57 2024 00:28:31.873 read: IOPS=592, BW=2372KiB/s (2429kB/s)(23.5MiB/10148msec) 00:28:31.873 slat (nsec): min=6562, max=71612, avg=27252.38, stdev=15185.22 00:28:31.873 clat (msec): min=9, max=175, avg=26.84, stdev= 9.16 00:28:31.873 lat (msec): min=9, max=175, avg=26.86, stdev= 9.16 00:28:31.873 clat percentiles (msec): 00:28:31.873 | 1.00th=[ 15], 5.00th=[ 21], 10.00th=[ 23], 20.00th=[ 24], 00:28:31.873 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 26], 00:28:31.873 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 34], 95.00th=[ 36], 00:28:31.873 | 99.00th=[ 45], 99.50th=[ 47], 99.90th=[ 176], 99.95th=[ 176], 00:28:31.873 | 99.99th=[ 176] 00:28:31.873 bw ( KiB/s): min= 2224, max= 2584, per=4.14%, avg=2400.15, stdev=109.10, samples=20 00:28:31.873 iops : min= 556, max= 646, avg=600.00, stdev=27.31, samples=20 00:28:31.873 lat (msec) : 10=0.12%, 20=4.60%, 50=94.95%, 100=0.07%, 250=0.27% 00:28:31.873 cpu : usr=98.22%, sys=1.21%, ctx=97, majf=0, minf=33 00:28:31.873 IO depths : 1=0.5%, 2=1.4%, 4=9.9%, 8=75.1%, 16=13.0%, 32=0.0%, >=64=0.0% 00:28:31.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.873 complete : 0=0.0%, 4=90.9%, 8=4.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.873 issued rwts: total=6017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:28:31.873 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.873 filename1: (groupid=0, jobs=1): err= 0: pid=3137935: Fri Jul 26 14:09:57 2024 00:28:31.873 read: IOPS=603, BW=2414KiB/s (2472kB/s)(23.9MiB/10153msec) 00:28:31.873 slat (nsec): min=6391, max=87306, avg=27979.56, stdev=14096.07 00:28:31.873 clat (msec): min=11, max=167, avg=26.30, stdev= 8.16 00:28:31.873 lat (msec): min=12, max=167, avg=26.33, stdev= 8.16 00:28:31.873 clat percentiles (msec): 00:28:31.873 | 1.00th=[ 16], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 24], 00:28:31.873 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 26], 00:28:31.873 | 70.00th=[ 28], 80.00th=[ 30], 90.00th=[ 33], 95.00th=[ 35], 00:28:31.873 | 99.00th=[ 41], 99.50th=[ 46], 99.90th=[ 167], 99.95th=[ 167], 00:28:31.873 | 99.99th=[ 167] 00:28:31.873 bw ( KiB/s): min= 2176, max= 2560, per=4.21%, avg=2444.55, stdev=118.66, samples=20 00:28:31.873 iops : min= 544, max= 640, avg=611.10, stdev=29.66, samples=20 00:28:31.873 lat (msec) : 20=4.01%, 50=95.72%, 250=0.26% 00:28:31.873 cpu : usr=97.94%, sys=1.37%, ctx=170, majf=0, minf=41 00:28:31.873 IO depths : 1=1.8%, 2=3.6%, 4=11.9%, 8=71.3%, 16=11.4%, 32=0.0%, >=64=0.0% 00:28:31.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.873 complete : 0=0.0%, 4=91.0%, 8=3.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.873 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.873 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.873 filename1: (groupid=0, jobs=1): err= 0: pid=3137936: Fri Jul 26 14:09:57 2024 00:28:31.873 read: IOPS=619, BW=2478KiB/s (2537kB/s)(24.6MiB/10173msec) 00:28:31.873 slat (nsec): min=6533, max=77335, avg=25849.48, stdev=13921.90 00:28:31.873 clat (msec): min=10, max=174, avg=25.53, stdev= 7.28 00:28:31.873 lat (msec): min=10, max=174, avg=25.56, stdev= 7.28 00:28:31.873 clat percentiles (msec): 00:28:31.873 | 1.00th=[ 16], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 23], 00:28:31.873 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 25], 00:28:31.873 | 70.00th=[ 26], 80.00th=[ 28], 90.00th=[ 31], 95.00th=[ 34], 00:28:31.873 | 99.00th=[ 42], 99.50th=[ 45], 99.90th=[ 176], 99.95th=[ 176], 00:28:31.873 | 99.99th=[ 176] 00:28:31.873 bw ( KiB/s): min= 2176, max= 2672, per=4.34%, avg=2516.40, stdev=113.59, samples=20 00:28:31.873 iops : min= 544, max= 668, avg=629.05, stdev=28.37, samples=20 00:28:31.873 lat (msec) : 20=2.55%, 50=97.03%, 100=0.25%, 250=0.16% 00:28:31.873 cpu : usr=92.64%, sys=3.12%, ctx=190, majf=0, minf=36 00:28:31.873 IO depths : 1=1.0%, 2=2.1%, 4=8.9%, 8=75.4%, 16=12.7%, 32=0.0%, >=64=0.0% 00:28:31.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.873 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.873 issued rwts: total=6302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.873 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.873 filename2: (groupid=0, jobs=1): err= 0: pid=3137937: Fri Jul 26 14:09:57 2024 00:28:31.873 read: IOPS=606, BW=2426KiB/s (2485kB/s)(24.0MiB/10148msec) 00:28:31.873 slat (nsec): min=6440, max=69081, avg=19488.53, stdev=12289.89 00:28:31.873 clat (msec): min=10, max=174, avg=26.24, stdev= 9.10 00:28:31.873 lat (msec): min=10, max=174, avg=26.26, stdev= 9.10 00:28:31.873 clat percentiles (msec): 00:28:31.873 | 1.00th=[ 14], 5.00th=[ 18], 10.00th=[ 21], 20.00th=[ 23], 00:28:31.873 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 26], 00:28:31.873 | 70.00th=[ 28], 
80.00th=[ 30], 90.00th=[ 33], 95.00th=[ 35], 00:28:31.873 | 99.00th=[ 41], 99.50th=[ 46], 99.90th=[ 176], 99.95th=[ 176], 00:28:31.873 | 99.99th=[ 176] 00:28:31.873 bw ( KiB/s): min= 2176, max= 2640, per=4.23%, avg=2455.75, stdev=115.38, samples=20 00:28:31.873 iops : min= 544, max= 660, avg=613.90, stdev=28.90, samples=20 00:28:31.873 lat (msec) : 20=8.28%, 50=91.46%, 250=0.26% 00:28:31.873 cpu : usr=98.56%, sys=1.08%, ctx=17, majf=0, minf=28 00:28:31.873 IO depths : 1=2.2%, 2=4.8%, 4=16.8%, 8=65.5%, 16=10.7%, 32=0.0%, >=64=0.0% 00:28:31.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.873 complete : 0=0.0%, 4=92.6%, 8=2.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.873 issued rwts: total=6156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.873 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.873 filename2: (groupid=0, jobs=1): err= 0: pid=3137938: Fri Jul 26 14:09:57 2024 00:28:31.873 read: IOPS=600, BW=2403KiB/s (2461kB/s)(23.8MiB/10133msec) 00:28:31.873 slat (usec): min=6, max=624, avg=32.79, stdev=19.60 00:28:31.873 clat (msec): min=11, max=174, avg=26.44, stdev= 8.80 00:28:31.873 lat (msec): min=11, max=174, avg=26.47, stdev= 8.80 00:28:31.873 clat percentiles (msec): 00:28:31.873 | 1.00th=[ 17], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 24], 00:28:31.873 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 26], 00:28:31.873 | 70.00th=[ 27], 80.00th=[ 30], 90.00th=[ 33], 95.00th=[ 36], 00:28:31.873 | 99.00th=[ 41], 99.50th=[ 46], 99.90th=[ 176], 99.95th=[ 176], 00:28:31.873 | 99.99th=[ 176] 00:28:31.873 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2427.90, stdev=113.19, samples=20 00:28:31.873 iops : min= 544, max= 640, avg=606.90, stdev=28.29, samples=20 00:28:31.873 lat (msec) : 20=2.66%, 50=97.01%, 100=0.07%, 250=0.26% 00:28:31.873 cpu : usr=96.06%, sys=1.89%, ctx=68, majf=0, minf=35 00:28:31.873 IO depths : 1=1.2%, 2=2.6%, 4=10.7%, 8=73.2%, 16=12.2%, 32=0.0%, >=64=0.0% 00:28:31.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.873 complete : 0=0.0%, 4=90.7%, 8=4.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.873 issued rwts: total=6087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.873 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.873 filename2: (groupid=0, jobs=1): err= 0: pid=3137939: Fri Jul 26 14:09:57 2024 00:28:31.873 read: IOPS=597, BW=2391KiB/s (2448kB/s)(23.7MiB/10133msec) 00:28:31.873 slat (usec): min=6, max=146, avg=31.82, stdev=18.97 00:28:31.873 clat (msec): min=7, max=175, avg=26.57, stdev= 8.92 00:28:31.873 lat (msec): min=7, max=175, avg=26.60, stdev= 8.92 00:28:31.873 clat percentiles (msec): 00:28:31.873 | 1.00th=[ 16], 5.00th=[ 21], 10.00th=[ 23], 20.00th=[ 24], 00:28:31.873 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 26], 00:28:31.873 | 70.00th=[ 28], 80.00th=[ 30], 90.00th=[ 33], 95.00th=[ 35], 00:28:31.873 | 99.00th=[ 43], 99.50th=[ 46], 99.90th=[ 176], 99.95th=[ 176], 00:28:31.873 | 99.99th=[ 176] 00:28:31.873 bw ( KiB/s): min= 1968, max= 2688, per=4.17%, avg=2416.15, stdev=160.74, samples=20 00:28:31.873 iops : min= 492, max= 672, avg=604.00, stdev=40.17, samples=20 00:28:31.873 lat (msec) : 10=0.02%, 20=4.31%, 50=95.36%, 100=0.05%, 250=0.26% 00:28:31.873 cpu : usr=97.01%, sys=1.51%, ctx=63, majf=0, minf=27 00:28:31.873 IO depths : 1=0.9%, 2=2.1%, 4=11.2%, 8=73.2%, 16=12.6%, 32=0.0%, >=64=0.0% 00:28:31.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.873 complete : 0=0.0%, 4=91.1%, 8=4.1%, 
16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.873 issued rwts: total=6057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.873 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.873 filename2: (groupid=0, jobs=1): err= 0: pid=3137940: Fri Jul 26 14:09:57 2024 00:28:31.873 read: IOPS=609, BW=2438KiB/s (2496kB/s)(24.2MiB/10152msec) 00:28:31.874 slat (nsec): min=6571, max=87212, avg=28220.04, stdev=15366.56 00:28:31.874 clat (msec): min=10, max=175, avg=26.08, stdev= 9.02 00:28:31.874 lat (msec): min=10, max=175, avg=26.11, stdev= 9.02 00:28:31.874 clat percentiles (msec): 00:28:31.874 | 1.00th=[ 14], 5.00th=[ 18], 10.00th=[ 22], 20.00th=[ 23], 00:28:31.874 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 26], 00:28:31.874 | 70.00th=[ 27], 80.00th=[ 30], 90.00th=[ 33], 95.00th=[ 35], 00:28:31.874 | 99.00th=[ 42], 99.50th=[ 46], 99.90th=[ 176], 99.95th=[ 176], 00:28:31.874 | 99.99th=[ 176] 00:28:31.874 bw ( KiB/s): min= 2216, max= 2864, per=4.26%, avg=2468.15, stdev=145.18, samples=20 00:28:31.874 iops : min= 554, max= 716, avg=617.00, stdev=36.31, samples=20 00:28:31.874 lat (msec) : 20=7.19%, 50=92.48%, 100=0.06%, 250=0.26% 00:28:31.874 cpu : usr=98.19%, sys=1.15%, ctx=96, majf=0, minf=36 00:28:31.874 IO depths : 1=1.1%, 2=2.3%, 4=10.3%, 8=73.2%, 16=13.2%, 32=0.0%, >=64=0.0% 00:28:31.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.874 complete : 0=0.0%, 4=91.2%, 8=4.6%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.874 issued rwts: total=6187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.874 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.874 filename2: (groupid=0, jobs=1): err= 0: pid=3137941: Fri Jul 26 14:09:57 2024 00:28:31.874 read: IOPS=615, BW=2462KiB/s (2521kB/s)(24.4MiB/10147msec) 00:28:31.874 slat (nsec): min=6544, max=86001, avg=26854.36, stdev=13975.14 00:28:31.874 clat (msec): min=11, max=174, avg=25.86, stdev= 8.66 00:28:31.874 lat (msec): min=11, max=174, avg=25.88, stdev= 8.66 00:28:31.874 clat percentiles (msec): 00:28:31.874 | 1.00th=[ 16], 5.00th=[ 21], 10.00th=[ 23], 20.00th=[ 23], 00:28:31.874 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 26], 00:28:31.874 | 70.00th=[ 26], 80.00th=[ 29], 90.00th=[ 32], 95.00th=[ 34], 00:28:31.874 | 99.00th=[ 42], 99.50th=[ 44], 99.90th=[ 174], 99.95th=[ 176], 00:28:31.874 | 99.99th=[ 176] 00:28:31.874 bw ( KiB/s): min= 2256, max= 2664, per=4.29%, avg=2491.35, stdev=127.08, samples=20 00:28:31.874 iops : min= 564, max= 666, avg=622.80, stdev=31.79, samples=20 00:28:31.874 lat (msec) : 20=4.13%, 50=95.61%, 250=0.26% 00:28:31.874 cpu : usr=97.11%, sys=1.77%, ctx=340, majf=0, minf=35 00:28:31.874 IO depths : 1=0.5%, 2=1.1%, 4=7.6%, 8=77.3%, 16=13.5%, 32=0.0%, >=64=0.0% 00:28:31.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.874 complete : 0=0.0%, 4=90.0%, 8=5.7%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.874 issued rwts: total=6245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.874 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.874 filename2: (groupid=0, jobs=1): err= 0: pid=3137942: Fri Jul 26 14:09:57 2024 00:28:31.874 read: IOPS=652, BW=2609KiB/s (2672kB/s)(25.9MiB/10155msec) 00:28:31.874 slat (nsec): min=6881, max=91911, avg=32087.29, stdev=14050.24 00:28:31.874 clat (msec): min=12, max=163, avg=24.27, stdev= 7.18 00:28:31.874 lat (msec): min=12, max=163, avg=24.30, stdev= 7.18 00:28:31.874 clat percentiles (msec): 00:28:31.874 | 1.00th=[ 16], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 23], 
00:28:31.874 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 25], 00:28:31.874 | 70.00th=[ 25], 80.00th=[ 26], 90.00th=[ 27], 95.00th=[ 28], 00:28:31.874 | 99.00th=[ 33], 99.50th=[ 36], 99.90th=[ 161], 99.95th=[ 161], 00:28:31.874 | 99.99th=[ 163] 00:28:31.874 bw ( KiB/s): min= 2304, max= 2816, per=4.56%, avg=2642.90, stdev=132.31, samples=20 00:28:31.874 iops : min= 576, max= 704, avg=660.70, stdev=33.07, samples=20 00:28:31.874 lat (msec) : 20=3.19%, 50=96.57%, 250=0.24% 00:28:31.874 cpu : usr=98.53%, sys=1.01%, ctx=35, majf=0, minf=35 00:28:31.874 IO depths : 1=5.0%, 2=10.4%, 4=22.6%, 8=54.1%, 16=7.9%, 32=0.0%, >=64=0.0% 00:28:31.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.874 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.874 issued rwts: total=6624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.874 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.874 filename2: (groupid=0, jobs=1): err= 0: pid=3137943: Fri Jul 26 14:09:57 2024 00:28:31.874 read: IOPS=609, BW=2437KiB/s (2496kB/s)(24.2MiB/10155msec) 00:28:31.874 slat (usec): min=6, max=175, avg=34.03, stdev=18.14 00:28:31.874 clat (msec): min=9, max=179, avg=26.05, stdev= 8.63 00:28:31.874 lat (msec): min=9, max=179, avg=26.09, stdev= 8.63 00:28:31.874 clat percentiles (msec): 00:28:31.874 | 1.00th=[ 15], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 23], 00:28:31.874 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 26], 00:28:31.874 | 70.00th=[ 27], 80.00th=[ 29], 90.00th=[ 32], 95.00th=[ 35], 00:28:31.874 | 99.00th=[ 41], 99.50th=[ 45], 99.90th=[ 176], 99.95th=[ 180], 00:28:31.874 | 99.99th=[ 180] 00:28:31.874 bw ( KiB/s): min= 2248, max= 2632, per=4.26%, avg=2468.55, stdev=108.41, samples=20 00:28:31.874 iops : min= 562, max= 658, avg=617.10, stdev=27.12, samples=20 00:28:31.874 lat (msec) : 10=0.05%, 20=4.09%, 50=95.60%, 250=0.26% 00:28:31.874 cpu : usr=95.50%, sys=1.84%, ctx=71, majf=0, minf=37 00:28:31.874 IO depths : 1=1.1%, 2=2.4%, 4=10.6%, 8=73.3%, 16=12.6%, 32=0.0%, >=64=0.0% 00:28:31.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.874 complete : 0=0.0%, 4=90.9%, 8=4.5%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.874 issued rwts: total=6188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.874 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.874 filename2: (groupid=0, jobs=1): err= 0: pid=3137944: Fri Jul 26 14:09:57 2024 00:28:31.874 read: IOPS=574, BW=2296KiB/s (2351kB/s)(22.7MiB/10127msec) 00:28:31.874 slat (nsec): min=6142, max=92907, avg=30562.27, stdev=19113.79 00:28:31.874 clat (msec): min=10, max=158, avg=27.62, stdev= 8.30 00:28:31.874 lat (msec): min=10, max=158, avg=27.65, stdev= 8.30 00:28:31.874 clat percentiles (msec): 00:28:31.874 | 1.00th=[ 16], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 24], 00:28:31.874 | 30.00th=[ 25], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 28], 00:28:31.874 | 70.00th=[ 30], 80.00th=[ 32], 90.00th=[ 35], 95.00th=[ 38], 00:28:31.874 | 99.00th=[ 45], 99.50th=[ 58], 99.90th=[ 159], 99.95th=[ 159], 00:28:31.874 | 99.99th=[ 159] 00:28:31.874 bw ( KiB/s): min= 2096, max= 2530, per=4.00%, avg=2318.20, stdev=120.88, samples=20 00:28:31.874 iops : min= 524, max= 632, avg=579.50, stdev=30.13, samples=20 00:28:31.874 lat (msec) : 20=3.34%, 50=96.11%, 100=0.28%, 250=0.28% 00:28:31.874 cpu : usr=97.41%, sys=1.33%, ctx=23, majf=0, minf=34 00:28:31.874 IO depths : 1=0.8%, 2=1.9%, 4=9.7%, 8=74.7%, 16=13.0%, 32=0.0%, >=64=0.0% 00:28:31.874 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.874 complete : 0=0.0%, 4=90.7%, 8=4.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.874 issued rwts: total=5813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.874 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.874 00:28:31.874 Run status group 0 (all jobs): 00:28:31.874 READ: bw=56.6MiB/s (59.4MB/s), 2234KiB/s-2701KiB/s (2288kB/s-2766kB/s), io=576MiB (604MB), run=10013-10176msec 00:28:31.874 14:09:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:31.874 14:09:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:31.874 14:09:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:31.874 14:09:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:31.874 14:09:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:31.874 14:09:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:31.874 14:09:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.874 14:09:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.874 14:09:58 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:31.874 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.875 bdev_null0 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.875 [2024-07-26 14:09:58.074414] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # 
for sub in "$@" 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.875 bdev_null1 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:31.875 { 00:28:31.875 "params": { 00:28:31.875 "name": "Nvme$subsystem", 00:28:31.875 "trtype": "$TEST_TRANSPORT", 00:28:31.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.875 "adrfam": "ipv4", 00:28:31.875 "trsvcid": "$NVMF_PORT", 00:28:31.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.875 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:28:31.875 "hdgst": ${hdgst:-false}, 00:28:31.875 "ddgst": ${ddgst:-false} 00:28:31.875 }, 00:28:31.875 "method": "bdev_nvme_attach_controller" 00:28:31.875 } 00:28:31.875 EOF 00:28:31.875 )") 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:31.875 { 00:28:31.875 "params": { 00:28:31.875 "name": "Nvme$subsystem", 00:28:31.875 "trtype": "$TEST_TRANSPORT", 00:28:31.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.875 "adrfam": "ipv4", 00:28:31.875 "trsvcid": "$NVMF_PORT", 00:28:31.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.875 "hdgst": ${hdgst:-false}, 00:28:31.875 "ddgst": ${ddgst:-false} 00:28:31.875 }, 00:28:31.875 "method": "bdev_nvme_attach_controller" 00:28:31.875 } 00:28:31.875 EOF 00:28:31.875 )") 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:31.875 14:09:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:31.875 "params": { 00:28:31.875 "name": "Nvme0", 00:28:31.875 "trtype": "tcp", 00:28:31.875 "traddr": "10.0.0.2", 00:28:31.875 "adrfam": "ipv4", 00:28:31.875 "trsvcid": "4420", 00:28:31.875 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:31.875 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:31.875 "hdgst": false, 00:28:31.875 "ddgst": false 00:28:31.875 }, 00:28:31.875 "method": "bdev_nvme_attach_controller" 00:28:31.875 },{ 00:28:31.875 "params": { 00:28:31.875 "name": "Nvme1", 00:28:31.875 "trtype": "tcp", 00:28:31.875 "traddr": "10.0.0.2", 00:28:31.875 "adrfam": "ipv4", 00:28:31.875 "trsvcid": "4420", 00:28:31.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:31.875 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:31.875 "hdgst": false, 00:28:31.875 "ddgst": false 00:28:31.875 }, 00:28:31.875 "method": "bdev_nvme_attach_controller" 00:28:31.876 }' 00:28:31.876 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:31.876 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:31.876 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:31.876 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:31.876 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:31.876 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:31.876 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:31.876 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:31.876 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:31.876 14:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:31.876 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:31.876 ... 00:28:31.876 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:31.876 ... 
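The merged JSON printed above is handed to fio as /dev/fd/62, while /dev/fd/61 carries the job file generated by gen_fio_conf, which the log does not echo. Based on the parameters set for this pass (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, one extra file) and the job banners above, a minimal sketch of that job file could look like the following; the Nvme0n1/Nvme1n1 filenames are assumptions about the bdev names that bdev_nvme_attach_controller creates for namespace 1 of Nvme0 and Nvme1.

[global]
# the SPDK bdev engine runs as threads rather than forked processes
ioengine=spdk_bdev
thread=1
rw=randread
# read,write,trim block sizes, matching the (R)/(W)/(T) sizes in the banners above
bs=8k,16k,128k
iodepth=8
numjobs=2
# runtime comes from the harness parameters; time_based is an assumption
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1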
00:28:31.876 fio-3.35 00:28:31.876 Starting 4 threads 00:28:31.876 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.192 00:28:37.192 filename0: (groupid=0, jobs=1): err= 0: pid=3139894: Fri Jul 26 14:10:04 2024 00:28:37.192 read: IOPS=2556, BW=20.0MiB/s (20.9MB/s)(99.9MiB/5002msec) 00:28:37.192 slat (nsec): min=6172, max=41972, avg=8850.02, stdev=2773.63 00:28:37.192 clat (usec): min=1685, max=52328, avg=3106.46, stdev=1307.90 00:28:37.192 lat (usec): min=1692, max=52355, avg=3115.31, stdev=1307.95 00:28:37.192 clat percentiles (usec): 00:28:37.192 | 1.00th=[ 2147], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2737], 00:28:37.192 | 30.00th=[ 2900], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3097], 00:28:37.192 | 70.00th=[ 3228], 80.00th=[ 3359], 90.00th=[ 3589], 95.00th=[ 3818], 00:28:37.192 | 99.00th=[ 4359], 99.50th=[ 4752], 99.90th=[ 5669], 99.95th=[52167], 00:28:37.192 | 99.99th=[52167] 00:28:37.192 bw ( KiB/s): min=17952, max=21104, per=24.95%, avg=20446.22, stdev=957.38, samples=9 00:28:37.192 iops : min= 2244, max= 2638, avg=2555.78, stdev=119.67, samples=9 00:28:37.192 lat (msec) : 2=0.22%, 4=96.77%, 10=2.95%, 100=0.06% 00:28:37.192 cpu : usr=95.94%, sys=3.72%, ctx=8, majf=0, minf=0 00:28:37.192 IO depths : 1=0.1%, 2=0.9%, 4=66.2%, 8=32.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:37.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.192 complete : 0=0.0%, 4=96.0%, 8=4.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.192 issued rwts: total=12788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.192 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:37.192 filename0: (groupid=0, jobs=1): err= 0: pid=3139895: Fri Jul 26 14:10:04 2024 00:28:37.192 read: IOPS=2537, BW=19.8MiB/s (20.8MB/s)(99.2MiB/5002msec) 00:28:37.192 slat (nsec): min=6178, max=28028, avg=8739.03, stdev=2829.29 00:28:37.192 clat (usec): min=1660, max=9526, avg=3130.81, stdev=466.98 00:28:37.192 lat (usec): min=1667, max=9551, avg=3139.55, stdev=466.98 00:28:37.192 clat percentiles (usec): 00:28:37.192 | 1.00th=[ 2147], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2802], 00:28:37.192 | 30.00th=[ 2999], 40.00th=[ 3064], 50.00th=[ 3064], 60.00th=[ 3163], 00:28:37.192 | 70.00th=[ 3294], 80.00th=[ 3425], 90.00th=[ 3687], 95.00th=[ 3884], 00:28:37.192 | 99.00th=[ 4359], 99.50th=[ 4686], 99.90th=[ 5211], 99.95th=[ 9110], 00:28:37.192 | 99.99th=[ 9503] 00:28:37.192 bw ( KiB/s): min=20016, max=20688, per=24.80%, avg=20325.33, stdev=211.96, samples=9 00:28:37.192 iops : min= 2502, max= 2586, avg=2540.67, stdev=26.50, samples=9 00:28:37.192 lat (msec) : 2=0.16%, 4=96.55%, 10=3.29% 00:28:37.192 cpu : usr=96.18%, sys=3.48%, ctx=7, majf=0, minf=9 00:28:37.192 IO depths : 1=0.1%, 2=1.0%, 4=65.8%, 8=33.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:37.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.192 complete : 0=0.0%, 4=96.3%, 8=3.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.192 issued rwts: total=12693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.192 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:37.192 filename1: (groupid=0, jobs=1): err= 0: pid=3139896: Fri Jul 26 14:10:04 2024 00:28:37.192 read: IOPS=2561, BW=20.0MiB/s (21.0MB/s)(100MiB/5002msec) 00:28:37.192 slat (nsec): min=6175, max=26639, avg=8800.39, stdev=2798.19 00:28:37.192 clat (usec): min=1587, max=7154, avg=3100.89, stdev=435.63 00:28:37.192 lat (usec): min=1593, max=7180, avg=3109.69, stdev=435.62 00:28:37.192 clat percentiles (usec): 00:28:37.192 | 1.00th=[ 2147], 5.00th=[ 
2376], 10.00th=[ 2573], 20.00th=[ 2769], 00:28:37.192 | 30.00th=[ 2966], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3130], 00:28:37.192 | 70.00th=[ 3261], 80.00th=[ 3392], 90.00th=[ 3621], 95.00th=[ 3818], 00:28:37.192 | 99.00th=[ 4359], 99.50th=[ 4555], 99.90th=[ 5080], 99.95th=[ 6718], 00:28:37.192 | 99.99th=[ 7111] 00:28:37.192 bw ( KiB/s): min=19920, max=20848, per=25.01%, avg=20496.89, stdev=250.84, samples=9 00:28:37.192 iops : min= 2490, max= 2606, avg=2562.11, stdev=31.35, samples=9 00:28:37.192 lat (msec) : 2=0.13%, 4=96.96%, 10=2.91% 00:28:37.192 cpu : usr=96.28%, sys=3.36%, ctx=5, majf=0, minf=0 00:28:37.192 IO depths : 1=0.1%, 2=1.0%, 4=65.6%, 8=33.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:37.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.192 complete : 0=0.0%, 4=96.5%, 8=3.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.192 issued rwts: total=12815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.192 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:37.192 filename1: (groupid=0, jobs=1): err= 0: pid=3139897: Fri Jul 26 14:10:04 2024 00:28:37.192 read: IOPS=2589, BW=20.2MiB/s (21.2MB/s)(101MiB/5003msec) 00:28:37.192 slat (nsec): min=6176, max=27359, avg=8754.07, stdev=2749.78 00:28:37.192 clat (usec): min=1767, max=15748, avg=3067.29, stdev=528.79 00:28:37.192 lat (usec): min=1774, max=15775, avg=3076.04, stdev=528.84 00:28:37.192 clat percentiles (usec): 00:28:37.192 | 1.00th=[ 2114], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2737], 00:28:37.192 | 30.00th=[ 2900], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3097], 00:28:37.192 | 70.00th=[ 3195], 80.00th=[ 3326], 90.00th=[ 3589], 95.00th=[ 3785], 00:28:37.192 | 99.00th=[ 4359], 99.50th=[ 4490], 99.90th=[ 5473], 99.95th=[15401], 00:28:37.192 | 99.99th=[15664] 00:28:37.192 bw ( KiB/s): min=20080, max=21296, per=25.31%, avg=20739.56, stdev=359.71, samples=9 00:28:37.192 iops : min= 2510, max= 2662, avg=2592.44, stdev=44.96, samples=9 00:28:37.192 lat (msec) : 2=0.22%, 4=97.08%, 10=2.64%, 20=0.06% 00:28:37.192 cpu : usr=96.60%, sys=3.04%, ctx=6, majf=0, minf=9 00:28:37.192 IO depths : 1=0.1%, 2=1.0%, 4=65.9%, 8=33.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:37.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.192 complete : 0=0.0%, 4=96.3%, 8=3.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.192 issued rwts: total=12954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.192 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:37.192 00:28:37.192 Run status group 0 (all jobs): 00:28:37.192 READ: bw=80.0MiB/s (83.9MB/s), 19.8MiB/s-20.2MiB/s (20.8MB/s-21.2MB/s), io=400MiB (420MB), run=5002-5003msec 00:28:37.192 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:37.192 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:37.192 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:37.192 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:37.192 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:37.192 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:37.193 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.193 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.193 14:10:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.193 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:37.193 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.193 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.193 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.193 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:37.193 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:37.193 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:37.193 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:37.193 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.193 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.193 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.193 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:37.193 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.193 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.193 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.193 00:28:37.193 real 0m24.503s 00:28:37.193 user 4m50.525s 00:28:37.193 sys 0m6.053s 00:28:37.193 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:37.193 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.193 ************************************ 00:28:37.193 END TEST fio_dif_rand_params 00:28:37.193 ************************************ 00:28:37.193 14:10:04 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:37.193 14:10:04 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:37.193 14:10:04 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:37.193 14:10:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:37.193 ************************************ 00:28:37.193 START TEST fio_dif_digest 00:28:37.193 ************************************ 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:37.193 bdev_null0 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.193 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:37.453 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.453 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:37.453 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.453 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:37.453 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.453 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:37.453 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.453 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:37.453 [2024-07-26 14:10:04.640102] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.453 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.453 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:37.453 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:37.454 { 00:28:37.454 "params": { 00:28:37.454 "name": "Nvme$subsystem", 00:28:37.454 "trtype": "$TEST_TRANSPORT", 00:28:37.454 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:28:37.454 "adrfam": "ipv4", 00:28:37.454 "trsvcid": "$NVMF_PORT", 00:28:37.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.454 "hdgst": ${hdgst:-false}, 00:28:37.454 "ddgst": ${ddgst:-false} 00:28:37.454 }, 00:28:37.454 "method": "bdev_nvme_attach_controller" 00:28:37.454 } 00:28:37.454 EOF 00:28:37.454 )") 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:37.454 "params": { 00:28:37.454 "name": "Nvme0", 00:28:37.454 "trtype": "tcp", 00:28:37.454 "traddr": "10.0.0.2", 00:28:37.454 "adrfam": "ipv4", 00:28:37.454 "trsvcid": "4420", 00:28:37.454 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:37.454 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:37.454 "hdgst": true, 00:28:37.454 "ddgst": true 00:28:37.454 }, 00:28:37.454 "method": "bdev_nvme_attach_controller" 00:28:37.454 }' 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:37.454 14:10:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:37.714 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:37.714 ... 
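For anyone replaying this step outside the autotest harness: dif.sh drives fio through SPDK's bdev ioengine, handing it the generated bdev JSON config on /dev/fd/62 and the generated job file on /dev/fd/61. A rough standalone equivalent is sketched below. The attach-controller parameters are the ones printed just above (note "hdgst": true and "ddgst": true, which is what makes this a digest run against the DIF-enabled null bdev created earlier); the job options are only inferred from the fio header line (randread, 128 KiB blocks, iodepth 3, three threads), the file names bdev.json and digest.fio are placeholders, and gen_nvmf_target_json wraps the printed entry in a full bdev-subsystem config envelope before fio sees it, so treat this as an approximation of what the script generates rather than a verbatim copy.

    # Target side: the same RPCs as the rpc_cmd calls traced above
    # (rpc_cmd is effectively a wrapper around scripts/rpc.py in autotest_common.sh)
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: fio with the spdk_bdev engine; bdev.json and digest.fio are hypothetical
    # stand-ins for the /dev/fd/62 and /dev/fd/61 descriptors the script creates on the fly.
    LD_PRELOAD=./spdk/build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json digest.fio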
00:28:37.714 fio-3.35 00:28:37.714 Starting 3 threads 00:28:37.714 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.948 00:28:49.948 filename0: (groupid=0, jobs=1): err= 0: pid=3140959: Fri Jul 26 14:10:15 2024 00:28:49.948 read: IOPS=220, BW=27.5MiB/s (28.9MB/s)(277MiB/10049msec) 00:28:49.948 slat (nsec): min=2971, max=26527, avg=11246.62, stdev=2009.59 00:28:49.948 clat (usec): min=6360, max=68720, avg=13594.12, stdev=10474.55 00:28:49.948 lat (usec): min=6367, max=68731, avg=13605.37, stdev=10474.62 00:28:49.948 clat percentiles (usec): 00:28:49.948 | 1.00th=[ 6587], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9503], 00:28:49.949 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11076], 60.00th=[11469], 00:28:49.949 | 70.00th=[11994], 80.00th=[12649], 90.00th=[15270], 95.00th=[51119], 00:28:49.949 | 99.00th=[58459], 99.50th=[59507], 99.90th=[68682], 99.95th=[68682], 00:28:49.949 | 99.99th=[68682] 00:28:49.949 bw ( KiB/s): min=19456, max=33792, per=32.41%, avg=28288.00, stdev=3712.12, samples=20 00:28:49.949 iops : min= 152, max= 264, avg=221.00, stdev=29.00, samples=20 00:28:49.949 lat (msec) : 10=25.90%, 20=68.35%, 50=0.27%, 100=5.47% 00:28:49.949 cpu : usr=95.33%, sys=4.25%, ctx=15, majf=0, minf=94 00:28:49.949 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:49.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.949 issued rwts: total=2212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:49.949 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:49.949 filename0: (groupid=0, jobs=1): err= 0: pid=3140960: Fri Jul 26 14:10:15 2024 00:28:49.949 read: IOPS=280, BW=35.1MiB/s (36.8MB/s)(353MiB/10048msec) 00:28:49.949 slat (nsec): min=4278, max=66874, avg=10803.86, stdev=2522.16 00:28:49.949 clat (usec): min=6122, max=64570, avg=10652.29, stdev=6104.84 00:28:49.949 lat (usec): min=6129, max=64583, avg=10663.10, stdev=6105.04 00:28:49.949 clat percentiles (usec): 00:28:49.949 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 7242], 20.00th=[ 7898], 00:28:49.949 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[10552], 00:28:49.949 | 70.00th=[11207], 80.00th=[11731], 90.00th=[12518], 95.00th=[13960], 00:28:49.949 | 99.00th=[52691], 99.50th=[55837], 99.90th=[64226], 99.95th=[64226], 00:28:49.949 | 99.99th=[64750] 00:28:49.949 bw ( KiB/s): min=26164, max=46080, per=41.35%, avg=36098.60, stdev=5950.12, samples=20 00:28:49.949 iops : min= 204, max= 360, avg=282.00, stdev=46.52, samples=20 00:28:49.949 lat (msec) : 10=50.82%, 20=47.52%, 50=0.07%, 100=1.59% 00:28:49.949 cpu : usr=95.06%, sys=4.50%, ctx=15, majf=0, minf=149 00:28:49.949 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:49.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.949 issued rwts: total=2822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:49.949 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:49.949 filename0: (groupid=0, jobs=1): err= 0: pid=3140961: Fri Jul 26 14:10:15 2024 00:28:49.949 read: IOPS=181, BW=22.6MiB/s (23.7MB/s)(227MiB/10045msec) 00:28:49.949 slat (nsec): min=6493, max=24450, avg=11363.92, stdev=2015.31 00:28:49.949 clat (usec): min=6216, max=97172, avg=16528.27, stdev=13899.06 00:28:49.949 lat (usec): min=6224, max=97184, avg=16539.64, stdev=13899.13 00:28:49.949 clat percentiles (usec): 
00:28:49.949 | 1.00th=[ 6980], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10683], 00:28:49.949 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12387], 00:28:49.949 | 70.00th=[13042], 80.00th=[14091], 90.00th=[51119], 95.00th=[54264], 00:28:49.949 | 99.00th=[58983], 99.50th=[60556], 99.90th=[95945], 99.95th=[96994], 00:28:49.949 | 99.99th=[96994] 00:28:49.949 bw ( KiB/s): min=16384, max=32768, per=26.64%, avg=23257.60, stdev=5088.42, samples=20 00:28:49.949 iops : min= 128, max= 256, avg=181.70, stdev=39.75, samples=20 00:28:49.949 lat (msec) : 10=11.98%, 20=77.19%, 50=0.38%, 100=10.45% 00:28:49.949 cpu : usr=95.63%, sys=4.03%, ctx=21, majf=0, minf=134 00:28:49.949 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:49.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.949 issued rwts: total=1819,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:49.949 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:49.949 00:28:49.949 Run status group 0 (all jobs): 00:28:49.949 READ: bw=85.2MiB/s (89.4MB/s), 22.6MiB/s-35.1MiB/s (23.7MB/s-36.8MB/s), io=857MiB (898MB), run=10045-10049msec 00:28:49.949 14:10:15 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:49.949 14:10:15 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:49.949 14:10:15 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:49.949 14:10:15 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:49.949 14:10:15 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:49.949 14:10:15 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:49.949 14:10:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.949 14:10:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:49.949 14:10:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.949 14:10:15 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:49.949 14:10:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.949 14:10:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:49.949 14:10:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.949 00:28:49.949 real 0m11.124s 00:28:49.949 user 0m35.641s 00:28:49.949 sys 0m1.555s 00:28:49.949 14:10:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:49.949 14:10:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:49.949 ************************************ 00:28:49.949 END TEST fio_dif_digest 00:28:49.949 ************************************ 00:28:49.949 14:10:15 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:49.949 14:10:15 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:49.949 14:10:15 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:49.949 14:10:15 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:28:49.949 14:10:15 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:49.949 14:10:15 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:28:49.949 14:10:15 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:49.949 14:10:15 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:49.949 rmmod nvme_tcp 00:28:49.949 
rmmod nvme_fabrics 00:28:49.949 rmmod nvme_keyring 00:28:49.949 14:10:15 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:49.949 14:10:15 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:28:49.949 14:10:15 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:28:49.949 14:10:15 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3132348 ']' 00:28:49.949 14:10:15 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3132348 00:28:49.949 14:10:15 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 3132348 ']' 00:28:49.949 14:10:15 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 3132348 00:28:49.949 14:10:15 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:28:49.949 14:10:15 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:49.949 14:10:15 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3132348 00:28:49.949 14:10:15 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:49.949 14:10:15 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:49.949 14:10:15 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3132348' 00:28:49.949 killing process with pid 3132348 00:28:49.949 14:10:15 nvmf_dif -- common/autotest_common.sh@969 -- # kill 3132348 00:28:49.949 14:10:15 nvmf_dif -- common/autotest_common.sh@974 -- # wait 3132348 00:28:49.949 14:10:16 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:49.949 14:10:16 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:50.892 Waiting for block devices as requested 00:28:50.892 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:50.892 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:51.153 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:51.153 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:51.153 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:51.412 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:51.412 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:51.412 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:51.412 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:51.672 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:51.672 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:51.672 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:51.672 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:51.932 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:51.932 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:51.932 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:52.193 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:52.193 14:10:19 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:52.193 14:10:19 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:52.193 14:10:19 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:52.193 14:10:19 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:52.193 14:10:19 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.193 14:10:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:52.193 14:10:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.105 14:10:21 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:54.105 00:28:54.105 real 1m13.078s 00:28:54.105 user 7m8.852s 00:28:54.105 sys 0m19.757s 00:28:54.105 14:10:21 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:54.105 14:10:21 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:28:54.105 ************************************ 00:28:54.105 END TEST nvmf_dif 00:28:54.105 ************************************ 00:28:54.105 14:10:21 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:54.105 14:10:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:54.105 14:10:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:54.105 14:10:21 -- common/autotest_common.sh@10 -- # set +x 00:28:54.366 ************************************ 00:28:54.366 START TEST nvmf_abort_qd_sizes 00:28:54.366 ************************************ 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:54.366 * Looking for test storage... 00:28:54.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.366 14:10:21 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:28:54.366 14:10:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:59.659 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:59.659 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:59.659 Found net devices under 0000:86:00.0: cvl_0_0 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:59.659 Found net devices under 0000:86:00.1: cvl_0_1 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
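The nvmf_tcp_init trace that follows turns the two ice-driver ports found above (0x8086:0x159b) into a self-contained initiator/target pair: cvl_0_0 is moved into a private network namespace and addressed as the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed, and with interface names specific to this machine, the setup below amounts to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port toward the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

followed by a single ping in each direction to confirm the path before the nvmf target application is started inside the namespace.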
00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.659 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.660 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:59.660 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.660 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.660 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:59.660 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:59.660 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.660 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.660 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.660 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.660 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:59.660 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.660 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.660 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.660 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:59.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:28:59.660 00:28:59.660 --- 10.0.0.2 ping statistics --- 00:28:59.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.660 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:28:59.660 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:59.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.535 ms 00:28:59.660 00:28:59.660 --- 10.0.0.1 ping statistics --- 00:28:59.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.660 rtt min/avg/max/mdev = 0.535/0.535/0.535/0.000 ms 00:28:59.660 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.660 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:28:59.660 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:59.660 14:10:26 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:02.202 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:02.202 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:02.202 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:02.202 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:02.202 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:02.202 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:02.202 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:02.202 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:02.202 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:02.202 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:02.202 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:02.202 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:02.202 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:02.202 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:02.202 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:02.202 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:03.219 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:03.219 14:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:03.219 14:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:03.219 14:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:03.219 14:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:03.219 14:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:03.219 14:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:03.219 14:10:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:03.219 14:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:03.219 14:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:03.219 14:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:03.219 14:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:03.219 14:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3148708 00:29:03.219 14:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3148708 00:29:03.219 14:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 3148708 ']' 00:29:03.219 14:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:03.219 14:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:03.219 14:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:03.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:03.219 14:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:03.219 14:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:03.219 [2024-07-26 14:10:30.589993] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:29:03.219 [2024-07-26 14:10:30.590037] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:03.219 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.480 [2024-07-26 14:10:30.646413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:03.480 [2024-07-26 14:10:30.728680] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:03.480 [2024-07-26 14:10:30.728717] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:03.480 [2024-07-26 14:10:30.728724] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:03.480 [2024-07-26 14:10:30.728730] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:03.480 [2024-07-26 14:10:30.728735] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:03.480 [2024-07-26 14:10:30.728776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.480 [2024-07-26 14:10:30.728795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:03.480 [2024-07-26 14:10:30.729125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:03.480 [2024-07-26 14:10:30.729127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:29:04.051 14:10:31 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:04.051 14:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:04.311 ************************************ 00:29:04.311 START TEST spdk_target_abort 00:29:04.311 ************************************ 00:29:04.311 14:10:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:29:04.311 14:10:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:04.311 14:10:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:29:04.311 14:10:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.311 14:10:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:07.607 spdk_targetn1 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:07.607 [2024-07-26 14:10:34.331715] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:07.607 [2024-07-26 14:10:34.368590] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:07.607 14:10:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:07.607 EAL: No free 2048 kB hugepages 
reported on node 1 00:29:10.905 Initializing NVMe Controllers 00:29:10.905 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:10.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:10.905 Initialization complete. Launching workers. 00:29:10.905 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 4593, failed: 0 00:29:10.905 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1554, failed to submit 3039 00:29:10.905 success 858, unsuccess 696, failed 0 00:29:10.905 14:10:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:10.905 14:10:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:10.905 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.200 Initializing NVMe Controllers 00:29:14.200 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:14.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:14.200 Initialization complete. Launching workers. 00:29:14.200 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8666, failed: 0 00:29:14.200 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1238, failed to submit 7428 00:29:14.200 success 312, unsuccess 926, failed 0 00:29:14.200 14:10:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:14.200 14:10:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:14.200 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.496 Initializing NVMe Controllers 00:29:17.496 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:17.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:17.496 Initialization complete. Launching workers. 
00:29:17.496 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33403, failed: 0 00:29:17.496 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2862, failed to submit 30541 00:29:17.496 success 721, unsuccess 2141, failed 0 00:29:17.496 14:10:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:17.496 14:10:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.496 14:10:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:17.496 14:10:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.496 14:10:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:17.496 14:10:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.496 14:10:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3148708 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 3148708 ']' 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 3148708 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3148708 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3148708' 00:29:18.437 killing process with pid 3148708 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 3148708 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 3148708 00:29:18.437 00:29:18.437 real 0m14.265s 00:29:18.437 user 0m57.046s 00:29:18.437 sys 0m2.114s 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:18.437 ************************************ 00:29:18.437 END TEST spdk_target_abort 00:29:18.437 ************************************ 00:29:18.437 14:10:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:18.437 14:10:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:18.437 14:10:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:18.437 14:10:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:18.437 ************************************ 00:29:18.437 START TEST kernel_target_abort 00:29:18.437 
************************************ 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:18.437 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:20.977 Waiting for block devices as requested 00:29:20.977 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:20.977 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:20.977 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:20.977 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:20.977 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:20.977 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:21.237 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:21.237 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:21.237 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:21.237 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:21.497 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:21.497 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:21.497 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:21.757 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:21.757 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:21.757 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:22.018 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:22.018 No valid GPT data, bailing 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:22.018 14:10:49 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:29:22.018 00:29:22.018 Discovery Log Number of Records 2, Generation counter 2 00:29:22.018 =====Discovery Log Entry 0====== 00:29:22.018 trtype: tcp 00:29:22.018 adrfam: ipv4 00:29:22.018 subtype: current discovery subsystem 00:29:22.018 treq: not specified, sq flow control disable supported 00:29:22.018 portid: 1 00:29:22.018 trsvcid: 4420 00:29:22.018 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:22.018 traddr: 10.0.0.1 00:29:22.018 eflags: none 00:29:22.018 sectype: none 00:29:22.018 =====Discovery Log Entry 1====== 00:29:22.018 trtype: tcp 00:29:22.018 adrfam: ipv4 00:29:22.018 subtype: nvme subsystem 00:29:22.018 treq: not specified, sq flow control disable supported 00:29:22.018 portid: 1 00:29:22.018 trsvcid: 4420 00:29:22.018 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:22.018 traddr: 10.0.0.1 00:29:22.018 eflags: none 00:29:22.018 sectype: none 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:22.018 14:10:49 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:22.018 14:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:22.018 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.313 Initializing NVMe Controllers 00:29:25.313 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:25.313 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:25.313 Initialization complete. Launching workers. 00:29:25.313 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 24012, failed: 0 00:29:25.313 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24012, failed to submit 0 00:29:25.313 success 0, unsuccess 24012, failed 0 00:29:25.313 14:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:25.313 14:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:25.313 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.648 Initializing NVMe Controllers 00:29:28.648 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:28.648 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:28.648 Initialization complete. Launching workers. 
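The qd=4 pass above and the qd=24 and qd=64 passes that follow come from rabort() in target/abort_qd_sizes.sh, which builds one transport-ID string from its arguments and reruns the SPDK abort example at each queue depth. A condensed sketch of that loop, with the binary path and transport ID copied from the trace and the loop body paraphrased rather than copied from the script:

# Sketch of the rabort() loop traced above; illustrative, not the script itself.
qds=(4 24 64)
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in "${qds[@]}"; do
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done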
00:29:28.648 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 52267, failed: 0 00:29:28.648 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13182, failed to submit 39085 00:29:28.648 success 0, unsuccess 13182, failed 0 00:29:28.648 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:28.648 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:28.648 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.187 Initializing NVMe Controllers 00:29:31.187 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:31.187 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:31.187 Initialization complete. Launching workers. 00:29:31.187 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 51804, failed: 0 00:29:31.187 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 12938, failed to submit 38866 00:29:31.187 success 0, unsuccess 12938, failed 0 00:29:31.187 14:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:31.187 14:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:31.187 14:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:29:31.187 14:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:31.187 14:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:31.187 14:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:31.187 14:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:31.187 14:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:31.187 14:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:31.187 14:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:33.733 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:33.733 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:33.733 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:33.733 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:33.733 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:33.733 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:33.733 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:33.733 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:33.733 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:33.733 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:33.733 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:33.733 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:33.733 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:33.733 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:29:33.733 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:33.733 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:34.674 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:34.674 00:29:34.674 real 0m16.234s 00:29:34.674 user 0m3.661s 00:29:34.674 sys 0m5.070s 00:29:34.674 14:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:34.674 14:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:34.674 ************************************ 00:29:34.674 END TEST kernel_target_abort 00:29:34.674 ************************************ 00:29:34.674 14:11:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:34.674 14:11:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:34.674 14:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:34.674 14:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:29:34.674 14:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:34.674 14:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:29:34.674 14:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:34.674 14:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:34.674 rmmod nvme_tcp 00:29:34.933 rmmod nvme_fabrics 00:29:34.933 rmmod nvme_keyring 00:29:34.933 14:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:34.933 14:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:29:34.933 14:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:29:34.933 14:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3148708 ']' 00:29:34.933 14:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3148708 00:29:34.933 14:11:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 3148708 ']' 00:29:34.933 14:11:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 3148708 00:29:34.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3148708) - No such process 00:29:34.933 14:11:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 3148708 is not found' 00:29:34.933 Process with pid 3148708 is not found 00:29:34.933 14:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:34.933 14:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:37.472 Waiting for block devices as requested 00:29:37.472 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:37.472 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:37.472 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:37.731 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:37.731 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:37.731 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:37.731 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:37.991 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:37.991 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:37.991 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:37.991 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:38.250 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:38.250 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:38.250 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:38.510 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:38.510 0000:80:04.1 
(8086 2021): vfio-pci -> ioatdma 00:29:38.510 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:38.510 14:11:05 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:38.510 14:11:05 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:38.510 14:11:05 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:38.510 14:11:05 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:38.510 14:11:05 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.510 14:11:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:38.510 14:11:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.054 14:11:07 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:41.054 00:29:41.054 real 0m46.361s 00:29:41.054 user 1m4.638s 00:29:41.054 sys 0m15.057s 00:29:41.054 14:11:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:41.054 14:11:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:41.054 ************************************ 00:29:41.054 END TEST nvmf_abort_qd_sizes 00:29:41.054 ************************************ 00:29:41.054 14:11:07 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:41.054 14:11:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:41.054 14:11:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:41.054 14:11:07 -- common/autotest_common.sh@10 -- # set +x 00:29:41.054 ************************************ 00:29:41.054 START TEST keyring_file 00:29:41.054 ************************************ 00:29:41.054 14:11:07 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:41.054 * Looking for test storage... 
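The kernel_target_abort run that finishes here used the in-kernel nvmet driver, not the SPDK target: nvmf/common.sh creates the subsystem, namespace and port under configfs, points the namespace at /dev/nvme0n1, and links the subsystem into the TCP port on 10.0.0.1:4420. Because xtrace does not show where each echo is redirected, the configfs attribute names below (attr_model, attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet names and should be read as assumptions rather than values taken from the log:

# Hedged reconstruction of the configure/clean kernel target steps traced above.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"          # target starts listening

# Teardown, mirroring clean_kernel_target in the trace:
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet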
00:29:41.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:41.054 14:11:08 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:41.054 14:11:08 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.055 14:11:08 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.055 14:11:08 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.055 14:11:08 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.055 14:11:08 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.055 14:11:08 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.055 14:11:08 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.055 14:11:08 keyring_file -- paths/export.sh@5 -- # export PATH 00:29:41.055 14:11:08 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@47 -- # : 0 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:41.055 14:11:08 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:41.055 14:11:08 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:41.055 14:11:08 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:41.055 14:11:08 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:41.055 14:11:08 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:41.055 14:11:08 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:41.055 14:11:08 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:41.055 14:11:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:41.055 14:11:08 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:41.055 14:11:08 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:41.055 14:11:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:41.055 14:11:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:41.055 14:11:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.l1CIkYymbt 00:29:41.055 14:11:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:41.055 14:11:08 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.l1CIkYymbt 00:29:41.055 14:11:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.l1CIkYymbt 00:29:41.055 14:11:08 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.l1CIkYymbt 00:29:41.055 14:11:08 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:41.055 14:11:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:41.055 14:11:08 keyring_file -- keyring/common.sh@17 -- # name=key1 00:29:41.055 14:11:08 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:41.055 14:11:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:41.055 14:11:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:41.055 14:11:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.t7No8Pi5CK 00:29:41.055 14:11:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:41.055 14:11:08 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:41.055 14:11:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.t7No8Pi5CK 00:29:41.055 14:11:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.t7No8Pi5CK 00:29:41.055 14:11:08 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.t7No8Pi5CK 00:29:41.055 14:11:08 keyring_file -- keyring/file.sh@30 -- # tgtpid=3157382 00:29:41.055 14:11:08 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3157382 00:29:41.055 14:11:08 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:41.055 14:11:08 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3157382 ']' 00:29:41.055 14:11:08 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.055 14:11:08 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:41.055 14:11:08 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.055 14:11:08 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:41.055 14:11:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:41.055 [2024-07-26 14:11:08.246781] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
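prep_key, traced just above, turns each raw hex key into a TLS PSK interchange file for the keyring tests: it formats the key as an NVMeTLSkey-1 string, writes it to a mktemp path, and restricts the file to mode 0600. The python body behind format_interchange_psk is not visible in the xtrace, so the checksum handling in this sketch is an assumption about the interchange format, not a copy of the helper:

# Hedged sketch of prep_key / format_interchange_psk. The NVMeTLSkey-1 prefix,
# the hex key, digest 0 and the chmod come from the trace; appending a
# little-endian CRC-32 before base64-encoding is an assumption.
key_hex=00112233445566778899aabbccddeeff
key_path=$(mktemp)        # e.g. /tmp/tmp.l1CIkYymbt in the trace
python3 - "$key_hex" > "$key_path" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"NVMeTLSkey-1:00:{base64.b64encode(key + crc).decode()}:")
EOF
chmod 0600 "$key_path"    # keyring_file rejects keys readable by group/other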
00:29:41.055 [2024-07-26 14:11:08.246833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3157382 ] 00:29:41.055 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.055 [2024-07-26 14:11:08.300320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.055 [2024-07-26 14:11:08.379811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.626 14:11:09 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:41.626 14:11:09 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:29:41.626 14:11:09 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:29:41.626 14:11:09 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.626 14:11:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:41.626 [2024-07-26 14:11:09.052178] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.887 null0 00:29:41.887 [2024-07-26 14:11:09.084240] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:41.887 [2024-07-26 14:11:09.084490] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:41.887 [2024-07-26 14:11:09.092238] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:41.887 14:11:09 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.887 14:11:09 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:41.887 14:11:09 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:29:41.887 14:11:09 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:41.887 14:11:09 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:41.887 14:11:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:41.887 14:11:09 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:41.887 14:11:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:41.887 14:11:09 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:41.887 14:11:09 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.887 14:11:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:41.887 [2024-07-26 14:11:09.104269] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:41.887 request: 00:29:41.887 { 00:29:41.887 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:41.887 "secure_channel": false, 00:29:41.887 "listen_address": { 00:29:41.887 "trtype": "tcp", 00:29:41.887 "traddr": "127.0.0.1", 00:29:41.887 "trsvcid": "4420" 00:29:41.887 }, 00:29:41.887 "method": "nvmf_subsystem_add_listener", 00:29:41.887 "req_id": 1 00:29:41.887 } 00:29:41.887 Got JSON-RPC error response 00:29:41.887 response: 00:29:41.887 { 00:29:41.887 "code": -32602, 00:29:41.887 "message": "Invalid parameters" 00:29:41.887 } 00:29:41.887 14:11:09 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:41.887 14:11:09 keyring_file -- common/autotest_common.sh@653 -- # es=1 
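With spdk_tgt listening and the add-listener negative case out of the way, the rest of the trace drives keyring/file.sh through bdevperf: the two PSK files are registered over the bperf RPC socket, a controller is attached with --psk key0, and I/O is exercised through bdevperf.py. The commands below are condensed from the invocations that appear later in this trace and use the temp key paths created above; nothing beyond those traced commands is implied:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf=/var/tmp/bperf.sock

# Register the PSK files with the bdevperf keyring
$rpc -s $bperf keyring_file_add_key key0 /tmp/tmp.l1CIkYymbt
$rpc -s $bperf keyring_file_add_key key1 /tmp/tmp.t7No8Pi5CK

# Attach a TLS-protected controller using key0, then run the workload
$rpc -s $bperf bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s $bperf perform_tests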
00:29:41.887 14:11:09 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:41.887 14:11:09 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:41.887 14:11:09 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:41.887 14:11:09 keyring_file -- keyring/file.sh@46 -- # bperfpid=3157524 00:29:41.887 14:11:09 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3157524 /var/tmp/bperf.sock 00:29:41.887 14:11:09 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:41.887 14:11:09 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3157524 ']' 00:29:41.888 14:11:09 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:41.888 14:11:09 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:41.888 14:11:09 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:41.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:41.888 14:11:09 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:41.888 14:11:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:41.888 [2024-07-26 14:11:09.154167] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 00:29:41.888 [2024-07-26 14:11:09.154208] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3157524 ] 00:29:41.888 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.888 [2024-07-26 14:11:09.207639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.888 [2024-07-26 14:11:09.286151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.828 14:11:09 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:42.828 14:11:09 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:29:42.828 14:11:09 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.l1CIkYymbt 00:29:42.828 14:11:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.l1CIkYymbt 00:29:42.828 14:11:10 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.t7No8Pi5CK 00:29:42.828 14:11:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.t7No8Pi5CK 00:29:43.088 14:11:10 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:29:43.088 14:11:10 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:29:43.088 14:11:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:43.088 14:11:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:43.088 14:11:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:43.088 14:11:10 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.l1CIkYymbt == \/\t\m\p\/\t\m\p\.\l\1\C\I\k\Y\y\m\b\t ]] 00:29:43.088 14:11:10 keyring_file -- 
keyring/file.sh@52 -- # jq -r .path 00:29:43.088 14:11:10 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:29:43.088 14:11:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:43.088 14:11:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:43.088 14:11:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:43.347 14:11:10 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.t7No8Pi5CK == \/\t\m\p\/\t\m\p\.\t\7\N\o\8\P\i\5\C\K ]] 00:29:43.347 14:11:10 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:29:43.347 14:11:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:43.347 14:11:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:43.347 14:11:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:43.347 14:11:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:43.347 14:11:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:43.608 14:11:10 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:43.608 14:11:10 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:29:43.608 14:11:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:43.608 14:11:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:43.608 14:11:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:43.608 14:11:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:43.608 14:11:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:43.608 14:11:11 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:43.608 14:11:11 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:43.608 14:11:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:43.868 [2024-07-26 14:11:11.187560] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:43.868 nvme0n1 00:29:43.868 14:11:11 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:29:43.868 14:11:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:43.868 14:11:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:43.868 14:11:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:43.868 14:11:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:43.868 14:11:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:44.128 14:11:11 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:44.128 14:11:11 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:29:44.128 14:11:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:44.128 14:11:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:44.128 14:11:11 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:44.128 14:11:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:44.128 14:11:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:44.388 14:11:11 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:44.388 14:11:11 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:44.388 Running I/O for 1 seconds... 00:29:45.328 00:29:45.328 Latency(us) 00:29:45.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.328 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:45.328 nvme0n1 : 1.03 2779.40 10.86 0.00 0.00 45475.56 3177.07 72944.42 00:29:45.328 =================================================================================================================== 00:29:45.328 Total : 2779.40 10.86 0.00 0.00 45475.56 3177.07 72944.42 00:29:45.328 0 00:29:45.589 14:11:12 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:45.589 14:11:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:45.589 14:11:12 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:29:45.589 14:11:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:45.589 14:11:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:45.589 14:11:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:45.589 14:11:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:45.589 14:11:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:45.933 14:11:13 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:45.933 14:11:13 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:29:45.933 14:11:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:45.933 14:11:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:45.933 14:11:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:45.933 14:11:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:45.933 14:11:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:45.933 14:11:13 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:45.933 14:11:13 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:45.933 14:11:13 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:29:45.933 14:11:13 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:45.933 14:11:13 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:29:45.933 14:11:13 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:45.933 14:11:13 keyring_file -- 
common/autotest_common.sh@642 -- # type -t bperf_cmd 00:29:45.933 14:11:13 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:45.933 14:11:13 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:45.933 14:11:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:46.193 [2024-07-26 14:11:13.491810] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:46.193 [2024-07-26 14:11:13.492387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c24820 (107): Transport endpoint is not connected 00:29:46.193 [2024-07-26 14:11:13.493382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c24820 (9): Bad file descriptor 00:29:46.193 [2024-07-26 14:11:13.494381] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:46.193 [2024-07-26 14:11:13.494391] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:46.193 [2024-07-26 14:11:13.494397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:46.193 request: 00:29:46.193 { 00:29:46.193 "name": "nvme0", 00:29:46.193 "trtype": "tcp", 00:29:46.193 "traddr": "127.0.0.1", 00:29:46.193 "adrfam": "ipv4", 00:29:46.193 "trsvcid": "4420", 00:29:46.193 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:46.193 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:46.193 "prchk_reftag": false, 00:29:46.193 "prchk_guard": false, 00:29:46.193 "hdgst": false, 00:29:46.193 "ddgst": false, 00:29:46.193 "psk": "key1", 00:29:46.193 "method": "bdev_nvme_attach_controller", 00:29:46.193 "req_id": 1 00:29:46.193 } 00:29:46.193 Got JSON-RPC error response 00:29:46.193 response: 00:29:46.193 { 00:29:46.193 "code": -5, 00:29:46.193 "message": "Input/output error" 00:29:46.193 } 00:29:46.193 14:11:13 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:29:46.193 14:11:13 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:46.193 14:11:13 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:46.193 14:11:13 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:46.193 14:11:13 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:29:46.193 14:11:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:46.193 14:11:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:46.193 14:11:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:46.193 14:11:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:46.193 14:11:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:46.453 14:11:13 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:46.453 14:11:13 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:29:46.453 14:11:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:46.453 14:11:13 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:46.453 14:11:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:46.453 14:11:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:46.453 14:11:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:46.453 14:11:13 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:46.453 14:11:13 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:46.453 14:11:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:46.714 14:11:14 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:46.714 14:11:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:46.974 14:11:14 keyring_file -- keyring/file.sh@77 -- # jq length 00:29:46.974 14:11:14 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:46.974 14:11:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:46.974 14:11:14 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:46.974 14:11:14 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.l1CIkYymbt 00:29:46.974 14:11:14 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.l1CIkYymbt 00:29:46.974 14:11:14 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:29:46.974 14:11:14 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.l1CIkYymbt 00:29:46.974 14:11:14 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:29:47.235 14:11:14 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:47.235 14:11:14 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:29:47.235 14:11:14 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:47.235 14:11:14 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.l1CIkYymbt 00:29:47.235 14:11:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.l1CIkYymbt 00:29:47.235 [2024-07-26 14:11:14.559339] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.l1CIkYymbt': 0100660 00:29:47.235 [2024-07-26 14:11:14.559363] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:47.235 request: 00:29:47.235 { 00:29:47.235 "name": "key0", 00:29:47.235 "path": "/tmp/tmp.l1CIkYymbt", 00:29:47.235 "method": "keyring_file_add_key", 00:29:47.235 "req_id": 1 00:29:47.235 } 00:29:47.235 Got JSON-RPC error response 00:29:47.235 response: 00:29:47.235 { 00:29:47.235 "code": -1, 00:29:47.235 "message": "Operation not permitted" 00:29:47.235 } 00:29:47.235 14:11:14 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:29:47.235 14:11:14 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:47.235 14:11:14 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:47.235 14:11:14 keyring_file -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:47.235 14:11:14 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.l1CIkYymbt 00:29:47.235 14:11:14 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.l1CIkYymbt 00:29:47.235 14:11:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.l1CIkYymbt 00:29:47.495 14:11:14 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.l1CIkYymbt 00:29:47.495 14:11:14 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:29:47.495 14:11:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:47.495 14:11:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:47.495 14:11:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:47.495 14:11:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:47.495 14:11:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:47.756 14:11:14 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:47.756 14:11:14 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:47.756 14:11:14 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:29:47.756 14:11:14 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:47.756 14:11:14 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:29:47.756 14:11:14 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:47.756 14:11:14 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:29:47.756 14:11:14 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:47.756 14:11:14 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:47.756 14:11:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:47.756 [2024-07-26 14:11:15.084734] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.l1CIkYymbt': No such file or directory 00:29:47.756 [2024-07-26 14:11:15.084755] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:47.756 [2024-07-26 14:11:15.084774] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:47.756 [2024-07-26 14:11:15.084781] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:47.756 [2024-07-26 14:11:15.084786] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:47.756 request: 00:29:47.756 { 00:29:47.756 "name": "nvme0", 00:29:47.756 "trtype": "tcp", 00:29:47.756 "traddr": "127.0.0.1", 00:29:47.756 "adrfam": "ipv4", 00:29:47.756 
"trsvcid": "4420", 00:29:47.756 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:47.756 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:47.756 "prchk_reftag": false, 00:29:47.756 "prchk_guard": false, 00:29:47.756 "hdgst": false, 00:29:47.756 "ddgst": false, 00:29:47.756 "psk": "key0", 00:29:47.756 "method": "bdev_nvme_attach_controller", 00:29:47.756 "req_id": 1 00:29:47.756 } 00:29:47.756 Got JSON-RPC error response 00:29:47.756 response: 00:29:47.756 { 00:29:47.756 "code": -19, 00:29:47.756 "message": "No such device" 00:29:47.756 } 00:29:47.756 14:11:15 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:29:47.756 14:11:15 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:47.756 14:11:15 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:47.756 14:11:15 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:47.756 14:11:15 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:47.756 14:11:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:48.016 14:11:15 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:48.016 14:11:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:48.016 14:11:15 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:48.016 14:11:15 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:48.016 14:11:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:48.016 14:11:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:48.016 14:11:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wIgu9joPIX 00:29:48.016 14:11:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:48.016 14:11:15 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:48.016 14:11:15 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:48.016 14:11:15 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:48.016 14:11:15 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:48.016 14:11:15 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:48.016 14:11:15 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:48.016 14:11:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wIgu9joPIX 00:29:48.016 14:11:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wIgu9joPIX 00:29:48.016 14:11:15 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.wIgu9joPIX 00:29:48.016 14:11:15 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wIgu9joPIX 00:29:48.016 14:11:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wIgu9joPIX 00:29:48.276 14:11:15 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:48.276 14:11:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:48.536 nvme0n1 00:29:48.536 
14:11:15 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:29:48.536 14:11:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:48.536 14:11:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:48.536 14:11:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:48.536 14:11:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:48.536 14:11:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:48.536 14:11:15 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:48.536 14:11:15 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:48.536 14:11:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:48.796 14:11:16 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:29:48.796 14:11:16 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:29:48.796 14:11:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:48.796 14:11:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:48.796 14:11:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:49.055 14:11:16 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:49.055 14:11:16 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:29:49.055 14:11:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:49.055 14:11:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:49.055 14:11:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:49.056 14:11:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:49.056 14:11:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:49.056 14:11:16 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:49.056 14:11:16 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:49.056 14:11:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:49.315 14:11:16 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:49.315 14:11:16 keyring_file -- keyring/file.sh@104 -- # jq length 00:29:49.315 14:11:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:49.575 14:11:16 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:49.575 14:11:16 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wIgu9joPIX 00:29:49.575 14:11:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wIgu9joPIX 00:29:49.575 14:11:16 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.t7No8Pi5CK 00:29:49.575 14:11:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.t7No8Pi5CK 00:29:49.836 14:11:17 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:49.836 14:11:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:50.096 nvme0n1 00:29:50.096 14:11:17 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:50.096 14:11:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:50.356 14:11:17 keyring_file -- keyring/file.sh@112 -- # config='{ 00:29:50.356 "subsystems": [ 00:29:50.356 { 00:29:50.356 "subsystem": "keyring", 00:29:50.356 "config": [ 00:29:50.356 { 00:29:50.356 "method": "keyring_file_add_key", 00:29:50.356 "params": { 00:29:50.356 "name": "key0", 00:29:50.356 "path": "/tmp/tmp.wIgu9joPIX" 00:29:50.356 } 00:29:50.356 }, 00:29:50.356 { 00:29:50.356 "method": "keyring_file_add_key", 00:29:50.356 "params": { 00:29:50.356 "name": "key1", 00:29:50.356 "path": "/tmp/tmp.t7No8Pi5CK" 00:29:50.356 } 00:29:50.356 } 00:29:50.356 ] 00:29:50.356 }, 00:29:50.356 { 00:29:50.356 "subsystem": "iobuf", 00:29:50.356 "config": [ 00:29:50.356 { 00:29:50.356 "method": "iobuf_set_options", 00:29:50.356 "params": { 00:29:50.356 "small_pool_count": 8192, 00:29:50.356 "large_pool_count": 1024, 00:29:50.356 "small_bufsize": 8192, 00:29:50.356 "large_bufsize": 135168 00:29:50.356 } 00:29:50.356 } 00:29:50.356 ] 00:29:50.356 }, 00:29:50.356 { 00:29:50.356 "subsystem": "sock", 00:29:50.356 "config": [ 00:29:50.356 { 00:29:50.356 "method": "sock_set_default_impl", 00:29:50.356 "params": { 00:29:50.356 "impl_name": "posix" 00:29:50.356 } 00:29:50.356 }, 00:29:50.356 { 00:29:50.356 "method": "sock_impl_set_options", 00:29:50.356 "params": { 00:29:50.356 "impl_name": "ssl", 00:29:50.356 "recv_buf_size": 4096, 00:29:50.356 "send_buf_size": 4096, 00:29:50.356 "enable_recv_pipe": true, 00:29:50.356 "enable_quickack": false, 00:29:50.356 "enable_placement_id": 0, 00:29:50.356 "enable_zerocopy_send_server": true, 00:29:50.356 "enable_zerocopy_send_client": false, 00:29:50.356 "zerocopy_threshold": 0, 00:29:50.356 "tls_version": 0, 00:29:50.356 "enable_ktls": false 00:29:50.356 } 00:29:50.356 }, 00:29:50.356 { 00:29:50.356 "method": "sock_impl_set_options", 00:29:50.356 "params": { 00:29:50.356 "impl_name": "posix", 00:29:50.356 "recv_buf_size": 2097152, 00:29:50.356 "send_buf_size": 2097152, 00:29:50.356 "enable_recv_pipe": true, 00:29:50.356 "enable_quickack": false, 00:29:50.356 "enable_placement_id": 0, 00:29:50.356 "enable_zerocopy_send_server": true, 00:29:50.356 "enable_zerocopy_send_client": false, 00:29:50.356 "zerocopy_threshold": 0, 00:29:50.356 "tls_version": 0, 00:29:50.356 "enable_ktls": false 00:29:50.356 } 00:29:50.356 } 00:29:50.356 ] 00:29:50.356 }, 00:29:50.356 { 00:29:50.356 "subsystem": "vmd", 00:29:50.356 "config": [] 00:29:50.356 }, 00:29:50.356 { 00:29:50.356 "subsystem": "accel", 00:29:50.356 "config": [ 00:29:50.356 { 00:29:50.356 "method": "accel_set_options", 00:29:50.356 "params": { 00:29:50.356 "small_cache_size": 128, 00:29:50.356 "large_cache_size": 16, 00:29:50.356 "task_count": 2048, 00:29:50.356 "sequence_count": 2048, 00:29:50.356 "buf_count": 2048 00:29:50.356 } 00:29:50.356 } 00:29:50.356 ] 00:29:50.356 
}, 00:29:50.356 { 00:29:50.356 "subsystem": "bdev", 00:29:50.356 "config": [ 00:29:50.356 { 00:29:50.356 "method": "bdev_set_options", 00:29:50.356 "params": { 00:29:50.356 "bdev_io_pool_size": 65535, 00:29:50.356 "bdev_io_cache_size": 256, 00:29:50.356 "bdev_auto_examine": true, 00:29:50.356 "iobuf_small_cache_size": 128, 00:29:50.356 "iobuf_large_cache_size": 16 00:29:50.356 } 00:29:50.356 }, 00:29:50.356 { 00:29:50.356 "method": "bdev_raid_set_options", 00:29:50.356 "params": { 00:29:50.356 "process_window_size_kb": 1024, 00:29:50.356 "process_max_bandwidth_mb_sec": 0 00:29:50.356 } 00:29:50.356 }, 00:29:50.356 { 00:29:50.356 "method": "bdev_iscsi_set_options", 00:29:50.356 "params": { 00:29:50.356 "timeout_sec": 30 00:29:50.356 } 00:29:50.356 }, 00:29:50.356 { 00:29:50.356 "method": "bdev_nvme_set_options", 00:29:50.357 "params": { 00:29:50.357 "action_on_timeout": "none", 00:29:50.357 "timeout_us": 0, 00:29:50.357 "timeout_admin_us": 0, 00:29:50.357 "keep_alive_timeout_ms": 10000, 00:29:50.357 "arbitration_burst": 0, 00:29:50.357 "low_priority_weight": 0, 00:29:50.357 "medium_priority_weight": 0, 00:29:50.357 "high_priority_weight": 0, 00:29:50.357 "nvme_adminq_poll_period_us": 10000, 00:29:50.357 "nvme_ioq_poll_period_us": 0, 00:29:50.357 "io_queue_requests": 512, 00:29:50.357 "delay_cmd_submit": true, 00:29:50.357 "transport_retry_count": 4, 00:29:50.357 "bdev_retry_count": 3, 00:29:50.357 "transport_ack_timeout": 0, 00:29:50.357 "ctrlr_loss_timeout_sec": 0, 00:29:50.357 "reconnect_delay_sec": 0, 00:29:50.357 "fast_io_fail_timeout_sec": 0, 00:29:50.357 "disable_auto_failback": false, 00:29:50.357 "generate_uuids": false, 00:29:50.357 "transport_tos": 0, 00:29:50.357 "nvme_error_stat": false, 00:29:50.357 "rdma_srq_size": 0, 00:29:50.357 "io_path_stat": false, 00:29:50.357 "allow_accel_sequence": false, 00:29:50.357 "rdma_max_cq_size": 0, 00:29:50.357 "rdma_cm_event_timeout_ms": 0, 00:29:50.357 "dhchap_digests": [ 00:29:50.357 "sha256", 00:29:50.357 "sha384", 00:29:50.357 "sha512" 00:29:50.357 ], 00:29:50.357 "dhchap_dhgroups": [ 00:29:50.357 "null", 00:29:50.357 "ffdhe2048", 00:29:50.357 "ffdhe3072", 00:29:50.357 "ffdhe4096", 00:29:50.357 "ffdhe6144", 00:29:50.357 "ffdhe8192" 00:29:50.357 ] 00:29:50.357 } 00:29:50.357 }, 00:29:50.357 { 00:29:50.357 "method": "bdev_nvme_attach_controller", 00:29:50.357 "params": { 00:29:50.357 "name": "nvme0", 00:29:50.357 "trtype": "TCP", 00:29:50.357 "adrfam": "IPv4", 00:29:50.357 "traddr": "127.0.0.1", 00:29:50.357 "trsvcid": "4420", 00:29:50.357 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:50.357 "prchk_reftag": false, 00:29:50.357 "prchk_guard": false, 00:29:50.357 "ctrlr_loss_timeout_sec": 0, 00:29:50.357 "reconnect_delay_sec": 0, 00:29:50.357 "fast_io_fail_timeout_sec": 0, 00:29:50.357 "psk": "key0", 00:29:50.357 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:50.357 "hdgst": false, 00:29:50.357 "ddgst": false 00:29:50.357 } 00:29:50.357 }, 00:29:50.357 { 00:29:50.357 "method": "bdev_nvme_set_hotplug", 00:29:50.357 "params": { 00:29:50.357 "period_us": 100000, 00:29:50.357 "enable": false 00:29:50.357 } 00:29:50.357 }, 00:29:50.357 { 00:29:50.357 "method": "bdev_wait_for_examine" 00:29:50.357 } 00:29:50.357 ] 00:29:50.357 }, 00:29:50.357 { 00:29:50.357 "subsystem": "nbd", 00:29:50.357 "config": [] 00:29:50.357 } 00:29:50.357 ] 00:29:50.357 }' 00:29:50.357 14:11:17 keyring_file -- keyring/file.sh@114 -- # killprocess 3157524 00:29:50.357 14:11:17 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3157524 ']' 00:29:50.357 14:11:17 
keyring_file -- common/autotest_common.sh@954 -- # kill -0 3157524 00:29:50.357 14:11:17 keyring_file -- common/autotest_common.sh@955 -- # uname 00:29:50.357 14:11:17 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:50.357 14:11:17 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3157524 00:29:50.357 14:11:17 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:50.357 14:11:17 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:50.357 14:11:17 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3157524' 00:29:50.357 killing process with pid 3157524 00:29:50.357 14:11:17 keyring_file -- common/autotest_common.sh@969 -- # kill 3157524 00:29:50.357 Received shutdown signal, test time was about 1.000000 seconds 00:29:50.357 00:29:50.357 Latency(us) 00:29:50.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.357 =================================================================================================================== 00:29:50.357 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:50.357 14:11:17 keyring_file -- common/autotest_common.sh@974 -- # wait 3157524 00:29:50.618 14:11:17 keyring_file -- keyring/file.sh@117 -- # bperfpid=3159048 00:29:50.618 14:11:17 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3159048 /var/tmp/bperf.sock 00:29:50.618 14:11:17 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3159048 ']' 00:29:50.618 14:11:17 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:50.618 14:11:17 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:50.618 14:11:17 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:50.618 14:11:17 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:50.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
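The step above tears down the first bdevperf instance and immediately relaunches it idle so the configuration saved a moment earlier can be replayed into it. A minimal sketch of that launch-and-drive pattern, with the same flags and socket path as in the trace (paths are shown relative to the SPDK tree, and $config is a placeholder for the saved JSON):

  # start bdevperf idle (-z) on a private RPC socket, feeding the JSON config via bash process substitution
  ./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config") &
  # once the socket is listening, drive the instance through that same socket
  ./scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests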
00:29:50.618 14:11:17 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:29:50.618 "subsystems": [ 00:29:50.618 { 00:29:50.618 "subsystem": "keyring", 00:29:50.618 "config": [ 00:29:50.618 { 00:29:50.618 "method": "keyring_file_add_key", 00:29:50.618 "params": { 00:29:50.618 "name": "key0", 00:29:50.618 "path": "/tmp/tmp.wIgu9joPIX" 00:29:50.618 } 00:29:50.618 }, 00:29:50.618 { 00:29:50.618 "method": "keyring_file_add_key", 00:29:50.618 "params": { 00:29:50.618 "name": "key1", 00:29:50.618 "path": "/tmp/tmp.t7No8Pi5CK" 00:29:50.618 } 00:29:50.618 } 00:29:50.618 ] 00:29:50.618 }, 00:29:50.618 { 00:29:50.618 "subsystem": "iobuf", 00:29:50.618 "config": [ 00:29:50.618 { 00:29:50.618 "method": "iobuf_set_options", 00:29:50.618 "params": { 00:29:50.618 "small_pool_count": 8192, 00:29:50.618 "large_pool_count": 1024, 00:29:50.618 "small_bufsize": 8192, 00:29:50.618 "large_bufsize": 135168 00:29:50.618 } 00:29:50.618 } 00:29:50.618 ] 00:29:50.618 }, 00:29:50.618 { 00:29:50.618 "subsystem": "sock", 00:29:50.618 "config": [ 00:29:50.618 { 00:29:50.618 "method": "sock_set_default_impl", 00:29:50.618 "params": { 00:29:50.618 "impl_name": "posix" 00:29:50.618 } 00:29:50.618 }, 00:29:50.618 { 00:29:50.618 "method": "sock_impl_set_options", 00:29:50.618 "params": { 00:29:50.618 "impl_name": "ssl", 00:29:50.618 "recv_buf_size": 4096, 00:29:50.618 "send_buf_size": 4096, 00:29:50.618 "enable_recv_pipe": true, 00:29:50.618 "enable_quickack": false, 00:29:50.618 "enable_placement_id": 0, 00:29:50.618 "enable_zerocopy_send_server": true, 00:29:50.618 "enable_zerocopy_send_client": false, 00:29:50.618 "zerocopy_threshold": 0, 00:29:50.618 "tls_version": 0, 00:29:50.618 "enable_ktls": false 00:29:50.618 } 00:29:50.618 }, 00:29:50.618 { 00:29:50.618 "method": "sock_impl_set_options", 00:29:50.618 "params": { 00:29:50.618 "impl_name": "posix", 00:29:50.618 "recv_buf_size": 2097152, 00:29:50.618 "send_buf_size": 2097152, 00:29:50.618 "enable_recv_pipe": true, 00:29:50.618 "enable_quickack": false, 00:29:50.618 "enable_placement_id": 0, 00:29:50.618 "enable_zerocopy_send_server": true, 00:29:50.618 "enable_zerocopy_send_client": false, 00:29:50.618 "zerocopy_threshold": 0, 00:29:50.618 "tls_version": 0, 00:29:50.618 "enable_ktls": false 00:29:50.618 } 00:29:50.618 } 00:29:50.618 ] 00:29:50.618 }, 00:29:50.618 { 00:29:50.618 "subsystem": "vmd", 00:29:50.618 "config": [] 00:29:50.618 }, 00:29:50.618 { 00:29:50.618 "subsystem": "accel", 00:29:50.618 "config": [ 00:29:50.618 { 00:29:50.618 "method": "accel_set_options", 00:29:50.618 "params": { 00:29:50.618 "small_cache_size": 128, 00:29:50.618 "large_cache_size": 16, 00:29:50.618 "task_count": 2048, 00:29:50.618 "sequence_count": 2048, 00:29:50.618 "buf_count": 2048 00:29:50.618 } 00:29:50.618 } 00:29:50.618 ] 00:29:50.618 }, 00:29:50.618 { 00:29:50.618 "subsystem": "bdev", 00:29:50.618 "config": [ 00:29:50.618 { 00:29:50.618 "method": "bdev_set_options", 00:29:50.618 "params": { 00:29:50.618 "bdev_io_pool_size": 65535, 00:29:50.618 "bdev_io_cache_size": 256, 00:29:50.618 "bdev_auto_examine": true, 00:29:50.618 "iobuf_small_cache_size": 128, 00:29:50.618 "iobuf_large_cache_size": 16 00:29:50.619 } 00:29:50.619 }, 00:29:50.619 { 00:29:50.619 "method": "bdev_raid_set_options", 00:29:50.619 "params": { 00:29:50.619 "process_window_size_kb": 1024, 00:29:50.619 "process_max_bandwidth_mb_sec": 0 00:29:50.619 } 00:29:50.619 }, 00:29:50.619 { 00:29:50.619 "method": "bdev_iscsi_set_options", 00:29:50.619 "params": { 00:29:50.619 "timeout_sec": 30 00:29:50.619 } 00:29:50.619 
}, 00:29:50.619 { 00:29:50.619 "method": "bdev_nvme_set_options", 00:29:50.619 "params": { 00:29:50.619 "action_on_timeout": "none", 00:29:50.619 "timeout_us": 0, 00:29:50.619 "timeout_admin_us": 0, 00:29:50.619 "keep_alive_timeout_ms": 10000, 00:29:50.619 "arbitration_burst": 0, 00:29:50.619 "low_priority_weight": 0, 00:29:50.619 "medium_priority_weight": 0, 00:29:50.619 "high_priority_weight": 0, 00:29:50.619 "nvme_adminq_poll_period_us": 10000, 00:29:50.619 "nvme_ioq_poll_period_us": 0, 00:29:50.619 "io_queue_requests": 512, 00:29:50.619 "delay_cmd_submit": true, 00:29:50.619 "transport_retry_count": 4, 00:29:50.619 "bdev_retry_count": 3, 00:29:50.619 "transport_ack_timeout": 0, 00:29:50.619 "ctrlr_loss_timeout_sec": 0, 00:29:50.619 "reconnect_delay_sec": 0, 00:29:50.619 "fast_io_fail_timeout_sec": 0, 00:29:50.619 "disable_auto_failback": false, 00:29:50.619 "generate_uuids": false, 00:29:50.619 "transport_tos": 0, 00:29:50.619 "nvme_error_stat": false, 00:29:50.619 "rdma_srq_size": 0, 00:29:50.619 "io_path_stat": false, 00:29:50.619 "allow_accel_sequence": false, 00:29:50.619 "rdma_max_cq_size": 0, 00:29:50.619 "rdma_cm_event_timeout_ms": 0, 00:29:50.619 "dhchap_digests": [ 00:29:50.619 "sha256", 00:29:50.619 "sha384", 00:29:50.619 "sha512" 00:29:50.619 ], 00:29:50.619 "dhchap_dhgroups": [ 00:29:50.619 "null", 00:29:50.619 "ffdhe2048", 00:29:50.619 "ffdhe3072", 00:29:50.619 "ffdhe4096", 00:29:50.619 "ffdhe6144", 00:29:50.619 "ffdhe8192" 00:29:50.619 ] 00:29:50.619 } 00:29:50.619 }, 00:29:50.619 { 00:29:50.619 "method": "bdev_nvme_attach_controller", 00:29:50.619 "params": { 00:29:50.619 "name": "nvme0", 00:29:50.619 "trtype": "TCP", 00:29:50.619 "adrfam": "IPv4", 00:29:50.619 "traddr": "127.0.0.1", 00:29:50.619 "trsvcid": "4420", 00:29:50.619 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:50.619 "prchk_reftag": false, 00:29:50.619 "prchk_guard": false, 00:29:50.619 "ctrlr_loss_timeout_sec": 0, 00:29:50.619 "reconnect_delay_sec": 0, 00:29:50.619 "fast_io_fail_timeout_sec": 0, 00:29:50.619 "psk": "key0", 00:29:50.619 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:50.619 "hdgst": false, 00:29:50.619 "ddgst": false 00:29:50.619 } 00:29:50.619 }, 00:29:50.619 { 00:29:50.619 "method": "bdev_nvme_set_hotplug", 00:29:50.619 "params": { 00:29:50.619 "period_us": 100000, 00:29:50.619 "enable": false 00:29:50.619 } 00:29:50.619 }, 00:29:50.619 { 00:29:50.619 "method": "bdev_wait_for_examine" 00:29:50.619 } 00:29:50.619 ] 00:29:50.619 }, 00:29:50.619 { 00:29:50.619 "subsystem": "nbd", 00:29:50.619 "config": [] 00:29:50.619 } 00:29:50.619 ] 00:29:50.619 }' 00:29:50.619 14:11:17 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:50.619 14:11:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:50.619 [2024-07-26 14:11:17.904969] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
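The configuration echoed into the relaunched bdevperf is the full save_config dump captured above; only the keyring and bdev_nvme entries actually matter for the TLS path. Condensed for readability, with values copied from the dump and every other subsystem left at its defaults:

  {
    "subsystems": [
      { "subsystem": "keyring",
        "config": [ { "method": "keyring_file_add_key",
                      "params": { "name": "key0", "path": "/tmp/tmp.wIgu9joPIX" } } ] },
      { "subsystem": "bdev",
        "config": [ { "method": "bdev_nvme_attach_controller",
                      "params": { "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                                  "traddr": "127.0.0.1", "trsvcid": "4420",
                                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                                  "psk": "key0" } } ] }
    ]
  }

Because both the key registration and the attach ride in the startup config, the controller comes up with TLS during initialization, which matches the "TLS support is considered experimental" notice showing up in the startup log below rather than after an explicit RPC.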
00:29:50.619 [2024-07-26 14:11:17.905021] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159048 ] 00:29:50.619 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.619 [2024-07-26 14:11:17.958808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.619 [2024-07-26 14:11:18.027029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.879 [2024-07-26 14:11:18.185159] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:51.448 14:11:18 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:51.448 14:11:18 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:29:51.448 14:11:18 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:51.448 14:11:18 keyring_file -- keyring/file.sh@120 -- # jq length 00:29:51.449 14:11:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:51.449 14:11:18 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:51.449 14:11:18 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:29:51.449 14:11:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:51.449 14:11:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:51.449 14:11:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:51.449 14:11:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:51.449 14:11:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:51.708 14:11:19 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:51.708 14:11:19 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:29:51.708 14:11:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:51.708 14:11:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:51.708 14:11:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:51.708 14:11:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:51.708 14:11:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:51.969 14:11:19 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:51.969 14:11:19 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:51.969 14:11:19 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:51.969 14:11:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:51.969 14:11:19 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:51.969 14:11:19 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:51.969 14:11:19 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.wIgu9joPIX /tmp/tmp.t7No8Pi5CK 00:29:51.969 14:11:19 keyring_file -- keyring/file.sh@20 -- # killprocess 3159048 00:29:51.969 14:11:19 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3159048 ']' 00:29:51.969 14:11:19 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3159048 00:29:52.229 14:11:19 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:29:52.229 14:11:19 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:52.229 14:11:19 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3159048 00:29:52.229 14:11:19 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:52.229 14:11:19 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:52.229 14:11:19 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3159048' 00:29:52.229 killing process with pid 3159048 00:29:52.229 14:11:19 keyring_file -- common/autotest_common.sh@969 -- # kill 3159048 00:29:52.229 Received shutdown signal, test time was about 1.000000 seconds 00:29:52.229 00:29:52.229 Latency(us) 00:29:52.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:52.229 =================================================================================================================== 00:29:52.229 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:52.229 14:11:19 keyring_file -- common/autotest_common.sh@974 -- # wait 3159048 00:29:52.229 14:11:19 keyring_file -- keyring/file.sh@21 -- # killprocess 3157382 00:29:52.229 14:11:19 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3157382 ']' 00:29:52.229 14:11:19 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3157382 00:29:52.229 14:11:19 keyring_file -- common/autotest_common.sh@955 -- # uname 00:29:52.229 14:11:19 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:52.229 14:11:19 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3157382 00:29:52.489 14:11:19 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:52.489 14:11:19 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:52.490 14:11:19 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3157382' 00:29:52.490 killing process with pid 3157382 00:29:52.490 14:11:19 keyring_file -- common/autotest_common.sh@969 -- # kill 3157382 00:29:52.490 [2024-07-26 14:11:19.672047] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:52.490 14:11:19 keyring_file -- common/autotest_common.sh@974 -- # wait 3157382 00:29:52.750 00:29:52.750 real 0m11.995s 00:29:52.750 user 0m28.034s 00:29:52.750 sys 0m2.595s 00:29:52.750 14:11:19 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:52.750 14:11:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:52.750 ************************************ 00:29:52.750 END TEST keyring_file 00:29:52.750 ************************************ 00:29:52.750 14:11:20 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:29:52.750 14:11:20 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:52.750 14:11:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:52.750 14:11:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:52.750 14:11:20 -- common/autotest_common.sh@10 -- # set +x 00:29:52.750 ************************************ 00:29:52.750 START TEST keyring_linux 00:29:52.750 ************************************ 00:29:52.750 14:11:20 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:52.750 * Looking for test 
storage... 00:29:52.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:52.750 14:11:20 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:52.750 14:11:20 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.750 14:11:20 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.750 14:11:20 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.750 14:11:20 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.750 14:11:20 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.750 14:11:20 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.750 14:11:20 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.750 14:11:20 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:52.750 14:11:20 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:52.750 14:11:20 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:52.750 14:11:20 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:52.750 14:11:20 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:52.750 14:11:20 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:52.750 14:11:20 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:52.750 14:11:20 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:52.750 14:11:20 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:52.750 14:11:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:52.750 14:11:20 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:29:52.750 14:11:20 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:52.750 14:11:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:52.750 14:11:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:52.750 14:11:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:52.750 14:11:20 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:53.010 14:11:20 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:53.010 14:11:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:53.010 /tmp/:spdk-test:key0 00:29:53.010 14:11:20 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:53.010 14:11:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:53.010 14:11:20 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:53.011 14:11:20 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:53.011 14:11:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:53.011 14:11:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:53.011 14:11:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:53.011 14:11:20 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:53.011 14:11:20 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:53.011 14:11:20 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:53.011 14:11:20 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:53.011 14:11:20 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:53.011 14:11:20 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:53.011 14:11:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:53.011 14:11:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:53.011 /tmp/:spdk-test:key1 00:29:53.011 14:11:20 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3159588 00:29:53.011 14:11:20 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3159588 00:29:53.011 14:11:20 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:53.011 14:11:20 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3159588 ']' 00:29:53.011 14:11:20 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.011 14:11:20 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:53.011 14:11:20 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.011 14:11:20 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:53.011 14:11:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:53.011 [2024-07-26 14:11:20.297942] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
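prep_key above turns each raw hex PSK into the NVMe/TCP TLS interchange form, NVMeTLSkey-1:00:<base64>:, and writes it to a mode-0600 file such as /tmp/:spdk-test:key0. The interchange string is, roughly, the configured key bytes with a CRC-32 appended, base64-encoded between the version/hash prefix and a trailing colon. A shell sketch of that construction follows; the little-endian CRC byte order and the gzip-trailer trick used to obtain it are assumptions here, not something the test itself does:

  key=00112233445566778899aabbccddeeff   # configured PSK from the test, hash id 0
  b64=$({ printf '%s' "$key"             # key bytes ...
          printf '%s' "$key" | gzip -c | tail -c 8 | head -c 4   # ... then CRC-32, little-endian, taken from the gzip trailer
        } | base64 -w0)
  printf 'NVMeTLSkey-1:00:%s:\n' "$b64"  # should resemble the NVMeTLSkey-1:00:MDAx...: value seen in the trace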
00:29:53.011 [2024-07-26 14:11:20.297992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159588 ] 00:29:53.011 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.011 [2024-07-26 14:11:20.349268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.011 [2024-07-26 14:11:20.428850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.952 14:11:21 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:53.952 14:11:21 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:29:53.952 14:11:21 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:53.952 14:11:21 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.952 14:11:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:53.952 [2024-07-26 14:11:21.102236] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.952 null0 00:29:53.952 [2024-07-26 14:11:21.134296] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:53.952 [2024-07-26 14:11:21.134644] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:53.952 14:11:21 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.952 14:11:21 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:53.952 372012414 00:29:53.952 14:11:21 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:53.952 578214983 00:29:53.952 14:11:21 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3159663 00:29:53.952 14:11:21 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3159663 /var/tmp/bperf.sock 00:29:53.952 14:11:21 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:53.952 14:11:21 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3159663 ']' 00:29:53.952 14:11:21 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:53.952 14:11:21 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:53.952 14:11:21 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:53.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:53.952 14:11:21 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:53.952 14:11:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:53.952 [2024-07-26 14:11:21.203225] Starting SPDK v24.09-pre git sha1 a14c64d79 / DPDK 24.03.0 initialization... 
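For the keyring_linux variant the interchange strings are loaded into the kernel session keyring instead of files, and the attach below references the key by its description, :spdk-test:key0. The moving parts, as exercised in the trace that follows (key payload elided, rpc.py path shown relative to the SPDK tree):

  # add the interchange-formatted PSK to the session keyring; keyctl prints the new key's serial number
  keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:...:' @s
  # bdevperf was started with --wait-for-rpc, so enable the Linux keyring module before finishing init
  ./scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
  ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # attach over TCP with TLS, naming the kernel key rather than a key file
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0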
00:29:53.952 [2024-07-26 14:11:21.203267] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159663 ] 00:29:53.952 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.952 [2024-07-26 14:11:21.256748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.952 [2024-07-26 14:11:21.336637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.887 14:11:22 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:54.887 14:11:22 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:29:54.887 14:11:22 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:54.887 14:11:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:54.887 14:11:22 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:54.887 14:11:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:55.146 14:11:22 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:55.146 14:11:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:55.405 [2024-07-26 14:11:22.588175] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:55.405 nvme0n1 00:29:55.405 14:11:22 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:55.405 14:11:22 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:55.405 14:11:22 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:55.405 14:11:22 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:55.405 14:11:22 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:55.405 14:11:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:55.663 14:11:22 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:55.663 14:11:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:55.663 14:11:22 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:55.663 14:11:22 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:55.663 14:11:22 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:55.663 14:11:22 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:55.663 14:11:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:55.663 14:11:23 keyring_linux -- keyring/linux.sh@25 -- # sn=372012414 00:29:55.663 14:11:23 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:55.663 14:11:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:29:55.663 14:11:23 keyring_linux -- keyring/linux.sh@26 -- # [[ 372012414 == \3\7\2\0\1\2\4\1\4 ]] 00:29:55.663 14:11:23 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 372012414 00:29:55.664 14:11:23 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:55.664 14:11:23 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:55.922 Running I/O for 1 seconds... 00:29:56.865 00:29:56.865 Latency(us) 00:29:56.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:56.865 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:56.865 nvme0n1 : 1.04 2361.62 9.23 0.00 0.00 53219.44 13107.20 68385.39 00:29:56.865 =================================================================================================================== 00:29:56.865 Total : 2361.62 9.23 0.00 0.00 53219.44 13107.20 68385.39 00:29:56.865 0 00:29:56.865 14:11:24 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:56.865 14:11:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:57.125 14:11:24 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:29:57.125 14:11:24 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:29:57.125 14:11:24 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:57.125 14:11:24 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:57.125 14:11:24 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:57.125 14:11:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:57.384 14:11:24 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:29:57.385 14:11:24 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:57.385 14:11:24 keyring_linux -- keyring/linux.sh@23 -- # return 00:29:57.385 14:11:24 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:57.385 14:11:24 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:29:57.385 14:11:24 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:57.385 14:11:24 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:29:57.385 14:11:24 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:57.385 14:11:24 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:29:57.385 14:11:24 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:57.385 14:11:24 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:57.385 14:11:24 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:57.385 [2024-07-26 14:11:24.754373] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:57.385 [2024-07-26 14:11:24.755060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6f770 (107): Transport endpoint is not connected 00:29:57.385 [2024-07-26 14:11:24.756055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6f770 (9): Bad file descriptor 00:29:57.385 [2024-07-26 14:11:24.757054] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:57.385 [2024-07-26 14:11:24.757063] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:57.385 [2024-07-26 14:11:24.757085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:57.385 request: 00:29:57.385 { 00:29:57.385 "name": "nvme0", 00:29:57.385 "trtype": "tcp", 00:29:57.385 "traddr": "127.0.0.1", 00:29:57.385 "adrfam": "ipv4", 00:29:57.385 "trsvcid": "4420", 00:29:57.385 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:57.385 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:57.385 "prchk_reftag": false, 00:29:57.385 "prchk_guard": false, 00:29:57.385 "hdgst": false, 00:29:57.385 "ddgst": false, 00:29:57.385 "psk": ":spdk-test:key1", 00:29:57.385 "method": "bdev_nvme_attach_controller", 00:29:57.385 "req_id": 1 00:29:57.385 } 00:29:57.385 Got JSON-RPC error response 00:29:57.385 response: 00:29:57.385 { 00:29:57.385 "code": -5, 00:29:57.385 "message": "Input/output error" 00:29:57.385 } 00:29:57.385 14:11:24 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:29:57.385 14:11:24 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:57.385 14:11:24 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:57.385 14:11:24 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:57.385 14:11:24 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:57.385 14:11:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:57.385 14:11:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:57.385 14:11:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:57.385 14:11:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:57.385 14:11:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:57.385 14:11:24 keyring_linux -- keyring/linux.sh@33 -- # sn=372012414 00:29:57.385 14:11:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 372012414 00:29:57.385 1 links removed 00:29:57.385 14:11:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:57.385 14:11:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:57.385 14:11:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:29:57.385 14:11:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:57.385 14:11:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:57.385 14:11:24 keyring_linux -- keyring/linux.sh@33 -- # sn=578214983 00:29:57.385 14:11:24 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 578214983 00:29:57.385 1 links removed 00:29:57.385 14:11:24 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3159663 00:29:57.385 14:11:24 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3159663 ']' 00:29:57.385 14:11:24 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3159663 00:29:57.385 14:11:24 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:29:57.385 14:11:24 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:57.385 14:11:24 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3159663 00:29:57.645 14:11:24 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:57.645 14:11:24 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:57.645 14:11:24 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3159663' 00:29:57.645 killing process with pid 3159663 00:29:57.645 14:11:24 keyring_linux -- common/autotest_common.sh@969 -- # kill 3159663 00:29:57.645 Received shutdown signal, test time was about 1.000000 seconds 00:29:57.645 00:29:57.645 Latency(us) 00:29:57.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.645 =================================================================================================================== 00:29:57.645 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:57.645 14:11:24 keyring_linux -- common/autotest_common.sh@974 -- # wait 3159663 00:29:57.645 14:11:25 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3159588 00:29:57.645 14:11:25 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3159588 ']' 00:29:57.645 14:11:25 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3159588 00:29:57.645 14:11:25 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:29:57.645 14:11:25 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:57.645 14:11:25 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3159588 00:29:57.645 14:11:25 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:57.645 14:11:25 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:57.645 14:11:25 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3159588' 00:29:57.645 killing process with pid 3159588 00:29:57.645 14:11:25 keyring_linux -- common/autotest_common.sh@969 -- # kill 3159588 00:29:57.645 14:11:25 keyring_linux -- common/autotest_common.sh@974 -- # wait 3159588 00:29:58.215 00:29:58.215 real 0m5.299s 00:29:58.215 user 0m9.375s 00:29:58.215 sys 0m1.159s 00:29:58.215 14:11:25 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:58.215 14:11:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:58.215 ************************************ 00:29:58.215 END TEST keyring_linux 00:29:58.215 ************************************ 00:29:58.215 14:11:25 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:29:58.215 14:11:25 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:29:58.215 14:11:25 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:29:58.215 14:11:25 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:29:58.215 14:11:25 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:29:58.215 14:11:25 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:29:58.215 14:11:25 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:29:58.215 14:11:25 -- spdk/autotest.sh@347 -- 
# '[' 0 -eq 1 ']' 00:29:58.215 14:11:25 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:29:58.215 14:11:25 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:29:58.215 14:11:25 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:29:58.215 14:11:25 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:29:58.215 14:11:25 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:29:58.215 14:11:25 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:29:58.215 14:11:25 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:29:58.215 14:11:25 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:29:58.215 14:11:25 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:29:58.215 14:11:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:58.215 14:11:25 -- common/autotest_common.sh@10 -- # set +x 00:29:58.215 14:11:25 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:29:58.215 14:11:25 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:29:58.215 14:11:25 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:29:58.215 14:11:25 -- common/autotest_common.sh@10 -- # set +x 00:30:02.481 INFO: APP EXITING 00:30:02.481 INFO: killing all VMs 00:30:02.481 INFO: killing vhost app 00:30:02.481 INFO: EXIT DONE 00:30:05.027 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:30:05.027 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:30:05.027 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:30:05.027 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:30:05.027 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:30:05.027 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:30:05.027 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:30:05.027 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:30:05.027 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:30:05.027 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:30:05.027 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:30:05.027 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:30:05.027 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:30:05.027 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:30:05.286 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:30:05.286 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:30:05.286 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:30:07.829 Cleaning 00:30:07.829 Removing: /var/run/dpdk/spdk0/config 00:30:07.829 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:07.829 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:07.829 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:07.829 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:07.829 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:07.829 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:07.829 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:07.829 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:07.829 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:07.829 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:07.829 Removing: /var/run/dpdk/spdk1/config 00:30:07.829 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:07.829 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:07.829 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:07.829 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:30:07.829 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:30:07.829 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:30:07.829 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:30:07.829 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:30:07.829 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:07.829 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:07.829 Removing: /var/run/dpdk/spdk1/mp_socket 00:30:07.829 Removing: /var/run/dpdk/spdk2/config 00:30:07.829 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:07.829 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:07.829 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:07.829 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:07.829 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:30:07.829 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:30:07.829 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:30:07.829 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:30:07.829 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:07.829 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:07.829 Removing: /var/run/dpdk/spdk3/config 00:30:07.829 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:08.090 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:08.090 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:08.090 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:08.090 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:30:08.090 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:30:08.090 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:30:08.090 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:30:08.090 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:08.090 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:08.090 Removing: /var/run/dpdk/spdk4/config 00:30:08.090 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:08.090 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:08.090 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:08.090 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:08.090 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:30:08.090 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:30:08.090 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:30:08.090 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:30:08.090 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:30:08.090 Removing: /var/run/dpdk/spdk4/hugepage_info 00:30:08.090 Removing: /dev/shm/bdev_svc_trace.1 00:30:08.090 Removing: /dev/shm/nvmf_trace.0 00:30:08.090 Removing: /dev/shm/spdk_tgt_trace.pid2780292 00:30:08.090 Removing: /var/run/dpdk/spdk0 00:30:08.090 Removing: /var/run/dpdk/spdk1 00:30:08.090 Removing: /var/run/dpdk/spdk2 00:30:08.090 Removing: /var/run/dpdk/spdk3 00:30:08.090 Removing: /var/run/dpdk/spdk4 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2778111 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2779206 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2780292 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2780922 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2781870 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2782110 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2783080 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2783312 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2783480 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2785077 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2786361 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2786713 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2786996 
00:30:08.090 Removing: /var/run/dpdk/spdk_pid2787309 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2787594 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2787844 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2788098 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2788373 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2789126 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2792109 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2792491 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2792771 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2792873 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2793358 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2793465 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2793863 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2794093 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2794349 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2794508 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2794630 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2794857 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2795408 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2795593 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2795911 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2799622 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2803885 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2814418 00:30:08.090 Removing: /var/run/dpdk/spdk_pid2815111 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2819374 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2819737 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2823876 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2829610 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2832353 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2842771 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2851683 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2853512 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2854578 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2871805 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2875633 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2918933 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2924318 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2930542 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2936549 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2936553 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2937469 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2938378 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2939220 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2939771 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2939785 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2940012 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2940239 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2940246 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2941143 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2941862 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2942774 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2943407 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2943462 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2943690 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2944930 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2945917 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2954728 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2978925 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2983396 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2985075 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2987157 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2987415 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2987582 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2987756 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2988623 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2990628 
00:30:08.350 Removing: /var/run/dpdk/spdk_pid2991617 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2992119 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2994276 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2994951 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2995673 00:30:08.350 Removing: /var/run/dpdk/spdk_pid2999726 00:30:08.350 Removing: /var/run/dpdk/spdk_pid3009682 00:30:08.350 Removing: /var/run/dpdk/spdk_pid3013708 00:30:08.350 Removing: /var/run/dpdk/spdk_pid3019910 00:30:08.350 Removing: /var/run/dpdk/spdk_pid3021221 00:30:08.350 Removing: /var/run/dpdk/spdk_pid3022772 00:30:08.350 Removing: /var/run/dpdk/spdk_pid3027099 00:30:08.350 Removing: /var/run/dpdk/spdk_pid3031432 00:30:08.350 Removing: /var/run/dpdk/spdk_pid3039210 00:30:08.350 Removing: /var/run/dpdk/spdk_pid3039215 00:30:08.350 Removing: /var/run/dpdk/spdk_pid3043695 00:30:08.350 Removing: /var/run/dpdk/spdk_pid3043934 00:30:08.350 Removing: /var/run/dpdk/spdk_pid3044166 00:30:08.350 Removing: /var/run/dpdk/spdk_pid3044618 00:30:08.350 Removing: /var/run/dpdk/spdk_pid3044626 00:30:08.350 Removing: /var/run/dpdk/spdk_pid3049101 00:30:08.350 Removing: /var/run/dpdk/spdk_pid3049675 00:30:08.350 Removing: /var/run/dpdk/spdk_pid3054134 00:30:08.350 Removing: /var/run/dpdk/spdk_pid3056931 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3062293 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3067707 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3076256 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3083878 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3083940 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3101812 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3102405 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3102984 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3103682 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3104649 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3105167 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3105830 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3106526 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3110775 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3111013 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3116970 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3117129 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3119354 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3127199 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3127204 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3132621 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3134588 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3136552 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3137600 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3139574 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3140758 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3149361 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3149829 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3150492 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3152578 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3153125 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3153680 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3157382 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3157524 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3159048 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3159588 00:30:08.610 Removing: /var/run/dpdk/spdk_pid3159663 00:30:08.610 Clean 00:30:08.610 14:11:35 -- common/autotest_common.sh@1451 -- # return 0 00:30:08.610 14:11:35 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:30:08.610 14:11:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:08.610 14:11:35 -- common/autotest_common.sh@10 -- # set +x 00:30:08.610 
14:11:36 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:30:08.610 14:11:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:08.610 14:11:36 -- common/autotest_common.sh@10 -- # set +x 00:30:08.870 14:11:36 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:08.870 14:11:36 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:30:08.870 14:11:36 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:30:08.870 14:11:36 -- spdk/autotest.sh@395 -- # hash lcov 00:30:08.870 14:11:36 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:08.870 14:11:36 -- spdk/autotest.sh@397 -- # hostname 00:30:08.870 14:11:36 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:30:08.870 geninfo: WARNING: invalid characters removed from testname! 00:30:30.824 14:11:55 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:31.083 14:11:58 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:32.990 14:12:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:34.900 14:12:02 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:36.810 14:12:04 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:38.721 14:12:05 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:40.661 14:12:07 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:40.661 14:12:07 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.661 14:12:07 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:40.661 14:12:07 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.661 14:12:07 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.661 14:12:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.661 14:12:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.661 14:12:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.661 14:12:07 -- paths/export.sh@5 -- $ export PATH 00:30:40.661 14:12:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.661 14:12:07 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:30:40.661 14:12:07 -- common/autobuild_common.sh@447 -- $ date +%s 00:30:40.661 14:12:07 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721995927.XXXXXX 00:30:40.661 14:12:07 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721995927.Ye6Fzl 00:30:40.662 14:12:07 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:30:40.662 14:12:07 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:30:40.662 14:12:07 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:30:40.662 14:12:07 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:40.662 14:12:07 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp 
--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:40.662 14:12:07 -- common/autobuild_common.sh@463 -- $ get_config_params 00:30:40.662 14:12:07 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:30:40.662 14:12:07 -- common/autotest_common.sh@10 -- $ set +x 00:30:40.662 14:12:07 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:30:40.662 14:12:07 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:30:40.662 14:12:07 -- pm/common@17 -- $ local monitor 00:30:40.662 14:12:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:40.662 14:12:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:40.662 14:12:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:40.662 14:12:07 -- pm/common@21 -- $ date +%s 00:30:40.662 14:12:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:40.662 14:12:07 -- pm/common@21 -- $ date +%s 00:30:40.662 14:12:07 -- pm/common@25 -- $ sleep 1 00:30:40.662 14:12:07 -- pm/common@21 -- $ date +%s 00:30:40.662 14:12:07 -- pm/common@21 -- $ date +%s 00:30:40.662 14:12:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721995927 00:30:40.662 14:12:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721995927 00:30:40.662 14:12:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721995927 00:30:40.662 14:12:07 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721995927 00:30:40.662 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721995927_collect-vmstat.pm.log 00:30:40.662 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721995927_collect-cpu-load.pm.log 00:30:40.662 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721995927_collect-cpu-temp.pm.log 00:30:40.662 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721995927_collect-bmc-pm.bmc.pm.log 00:30:41.602 14:12:08 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:30:41.602 14:12:08 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:30:41.602 14:12:08 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:41.602 14:12:08 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:41.602 14:12:08 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:41.602 14:12:08 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:41.602 14:12:08 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:41.602 14:12:08 -- 
common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:41.602 14:12:08 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:41.602 14:12:08 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:41.602 14:12:08 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:41.602 14:12:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:41.602 14:12:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:41.602 14:12:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:41.602 14:12:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:30:41.602 14:12:08 -- pm/common@44 -- $ pid=3169809 00:30:41.602 14:12:08 -- pm/common@50 -- $ kill -TERM 3169809 00:30:41.602 14:12:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:41.602 14:12:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:30:41.602 14:12:08 -- pm/common@44 -- $ pid=3169810 00:30:41.602 14:12:08 -- pm/common@50 -- $ kill -TERM 3169810 00:30:41.602 14:12:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:41.602 14:12:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:30:41.602 14:12:08 -- pm/common@44 -- $ pid=3169812 00:30:41.602 14:12:08 -- pm/common@50 -- $ kill -TERM 3169812 00:30:41.602 14:12:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:41.602 14:12:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:30:41.602 14:12:08 -- pm/common@44 -- $ pid=3169835 00:30:41.602 14:12:08 -- pm/common@50 -- $ sudo -E kill -TERM 3169835 00:30:41.602 + [[ -n 2674324 ]] 00:30:41.602 + sudo kill 2674324 00:30:41.611 [Pipeline] } 00:30:41.630 [Pipeline] // stage 00:30:41.635 [Pipeline] } 00:30:41.653 [Pipeline] // timeout 00:30:41.659 [Pipeline] } 00:30:41.675 [Pipeline] // catchError 00:30:41.681 [Pipeline] } 00:30:41.699 [Pipeline] // wrap 00:30:41.706 [Pipeline] } 00:30:41.745 [Pipeline] // catchError 00:30:41.755 [Pipeline] stage 00:30:41.757 [Pipeline] { (Epilogue) 00:30:41.772 [Pipeline] catchError 00:30:41.774 [Pipeline] { 00:30:41.787 [Pipeline] echo 00:30:41.788 Cleanup processes 00:30:41.792 [Pipeline] sh 00:30:42.078 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:42.078 3169922 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:30:42.078 3170473 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:42.093 [Pipeline] sh 00:30:42.382 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:42.382 ++ grep -v 'sudo pgrep' 00:30:42.382 ++ awk '{print $1}' 00:30:42.382 + sudo kill -9 3169922 00:30:42.395 [Pipeline] sh 00:30:42.681 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:52.687 [Pipeline] sh 00:30:52.975 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:52.975 Artifacts sizes are good 00:30:52.990 [Pipeline] archiveArtifacts 00:30:52.998 Archiving artifacts 00:30:53.156 [Pipeline] sh 00:30:53.445 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:30:53.461 [Pipeline] cleanWs 00:30:53.471 [WS-CLEANUP] Deleting project 
workspace... 00:30:53.471 [WS-CLEANUP] Deferred wipeout is used... 00:30:53.478 [WS-CLEANUP] done 00:30:53.480 [Pipeline] } 00:30:53.500 [Pipeline] // catchError 00:30:53.512 [Pipeline] sh 00:30:53.797 + logger -p user.info -t JENKINS-CI 00:30:53.808 [Pipeline] } 00:30:53.824 [Pipeline] // stage 00:30:53.829 [Pipeline] } 00:30:53.846 [Pipeline] // node 00:30:53.852 [Pipeline] End of Pipeline 00:30:53.889 Finished: SUCCESS
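
The coverage post-processing traced above (spdk/autotest.sh@397 through @404) is the usual lcov capture, merge, and prune sequence. Below is a minimal sketch of that pattern using a subset of the lcov options visible in the trace; OUT and the shortened paths are placeholders, not values taken from this run.

  # minimal sketch of the coverage post-processing pattern; OUT stands in for
  # the job's output directory and cov_base.info is assumed to have been
  # captured before the tests ran
  OUT=./output
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

  # capture the counters accumulated under the source tree into a test tracefile;
  # the test name comes from hostname, and the dashes in it are likely what
  # triggered the geninfo "invalid characters removed from testname" warning above
  lcov $LCOV_OPTS -c -d ./spdk -t "$(hostname)" -o "$OUT/cov_test.info"

  # merge the pre-test baseline with the post-test capture
  lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

  # prune paths that should not count toward SPDK coverage, in place
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
  done

  rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"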
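
The autopackage stage also launches several background collectors (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) and later stops them by checking for *.pid files and sending SIGTERM, as the pm/common@42 to @50 trace shows. The sketch below is a generic version of that pidfile lifecycle, not SPDK's actual pm/common code; POWER_DIR, the log names, and the vmstat example command are illustrative.

  # generic background-collector lifecycle with pidfiles (illustrative only)
  POWER_DIR=./output/power
  mkdir -p "$POWER_DIR"

  start_monitor() {               # start one collector and remember its PID
      local name=$1; shift
      "$@" > "$POWER_DIR/$name.log" 2>&1 &
      echo $! > "$POWER_DIR/$name.pid"
  }

  stop_monitors() {               # send SIGTERM to every collector we started
      local pidfile pid
      for pidfile in "$POWER_DIR"/*.pid; do
          [[ -e $pidfile ]] || continue
          pid=$(cat "$pidfile")
          kill -TERM "$pid" 2>/dev/null || true
      done
  }

  trap stop_monitors EXIT         # mirrors 'trap stop_monitor_resources EXIT' in the trace
  start_monitor collect-vmstat vmstat 1   # example collector
  sleep 5                                 # ...the actual build/tests would run here...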
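
Finally, timing_finish renders the step timings accumulated in timing.txt as an SVG flame graph with FlameGraph's flamegraph.pl, using the --title, --nametype and --countname options shown in the trace. Assuming timing.txt is already in the folded "label value" format flamegraph.pl consumes, which the log's own invocation implies, the step reduces to:

  # flamegraph.pl writes SVG to stdout; the timing.svg destination is an assumed
  # example, the trace above does not show where the job redirects the output
  /usr/local/FlameGraph/flamegraph.pl \
      --title 'Build Timing' \
      --nametype Step: \
      --countname seconds \
      ./output/timing.txt > ./output/timing.svg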